Game Theory: An Introductory Sketch

Some Examples of Games with More Complex Structures

All of the game examples so far are relatively simple in that time plays no part in them, however complex they may be in other ways. The passage of time can make at least three kinds of differences. First, people may learn: new information may become available that affects the payoffs their strategies can give. Second, even when people do not or cannot commit themselves at first, they may commit themselves later, and they may have to decide when and if to commit. Of course, this blurs the distinction we have so carefully set up between cooperative and noncooperative games, but life is like that. Third, there is the possibility of retaliation against other players who fail to cooperate with us. That, too, blurs the cooperative-noncooperative distinction. It means, in particular, that repeated games -- and particularly repeated prisoners' dilemmas -- may have quite different outcomes than they do when they are played one-off. But we shall leave repeated games aside as an advanced topic and move on to study sequential games and the problems that arise when people can make commitments only in stages, at different points in the game. I personally find these examples interesting and to the point, and they are somewhat original.

There are some surprising results. One is that, in some games, people are better off if they can give up some of their freedom of choice, binding themselves to do things at a later stage in the game that may not look right when they get to that stage. An example of this (I suggest) is to be found in Marriage Vows. This provides a good example of what some folks call "economic imperialism" -- the use of economics (and game theory) to explain human behavior we do not usually think of as economic, rational, or calculating -- although you do not really need to know any economics to follow the example in Marriage Vows. Another example along the same line (although the main application is economics in a more conventional sense) is The Paradox of Benevolent Authority, which tries to capture, in game-theoretic terms, a reason why liberal societies often try to constrain their authorities rather than relying on their benevolence.

Also, the following example will have to do with relations between an employer and an employee: A Theory of Burnout. For an example in which flexibility is important, so that giving up freedom of choice is a bad idea, and another non-imperialistic economic application of game theory, see The Essence of Bankruptcy. Of course, that note is meant to discuss bankruptcy, not to exemplify it!

A Theory of Marriage Vows

This example is an attempt to use game theory to "explain" marriage vows. But first (given the nature of the topic) it might be a good idea to say something about "explanation" using game theory.

One possible objection is that marriage is a very emotional and even spiritual topic, and game theory doesn't say anything about emotions and spirit. Instead game theory is about payoffs and strategies and rationality. That's true, but the specific phenomenon -- the taking of vows that (in some societies, at least) restrict freedom of choice -- may have more to do with payoffs and strategies than with anything else, and may be rational. In that case, a game-theoretic model may capture the aspects that are most relevant to the institution of marriage vows. Second, game-theoretic explanations are never conclusive. The most we can say is that we have a game-theoretic model, with payoffs and strategies like this, that would lead rational players to choose the strategies that, in the actual world, they seem to choose. It remains possible that their real reasons are different and deeper, or irrational and emotional. That's no less true of bankruptcy than of marriage. Indeed, from some points of view, their "real reasons" have to be deeper and more complex -- no picture of the world is ever "complete." The best we can hope for is a picture that fits fairly well and contains some insight. I think game theory can "explain" marriage vows in this sense.

In some sequential games, in which the players have to make decisions in sequence, freedom of choice can be a problem. These are games that give one or more players possibilities for "opportunism." That is, some players are able to make their decisions in late stages of the game in ways that exploit the decisions made by others in early stages. But those who make the decisions in the early stages will then avoid decisions that make them vulnerable to opportunism, with results that can be inferior all around. In these circumstances, the potential opportunist might welcome some sort of restraint that would make it impossible for him to act opportunistically at the later stage. Jon Elster made the legend of "Ulysses and the Sirens" a symbol for this. Recall, in the legend, Ulysses wanted to hear the sirens sing; but he knew that anyone who heard them would destroy himself trying to go to them. Thus, Ulysses decided at the first stage of the game to have himself bound to the mast, so that, at the second stage, he would not have the freedom to choose self-destruction. Sequential games are a bit different from that, in that they involve interactions of two or more people, but games of sequential commitment can give players reason to act as Ulysses did -- that is, to rationally choose at the first step in a way that limits their freedom of choice at the second step. That is our strategy in attempting to "explain" marriage vows.

Here is the "game." At the first stage, two people get together. They can either stay together for one period or two. If they take a vow, they are committed to stay together for both periods. During the first period, each person can choose whether or not to "invest in the relationship." "Investing in the relationship" means making a special effort in the first period that yields the investor benefits only in the second period, and only if the couple stay together. At the end of the first period, if there has been no vow, each partner decides whether to remain together for the second period or separate. If either prefers to separate, then separation occurs; but if both choose to remain together, they remain together for the second period. Payoffs in the second period depend on whether the couple separate, and, if they stay together, on who invested in the first period.

The payoffs are determined as follows: First, in the first stage, the payoff to one partner is 40, minus 30 if that partner "invests in the relationship," plus 20 if the other partner "invests in the relationship." Thus, investment in the relationship is a loss in the first period -- that's what makes it "investment." In the second period, if they separate, both partners get additional payoffs of 30. Thus, each partner can assure himself or herself of 70 by not investing and then separating. However, if they stay together, each partner gets an additional payoff of 20 plus (if only the other partner invested) 25 or (if both partners invested) 60.

Notice that the total return to investment to the couple over both periods is disproportionately greater if both persons invest. If both invest and stay together, the couple's joint payoff is 110+110 = 220, compared with 70+70 = 140 if neither invests -- a gain of 80. If only one invests, the joint payoff is 115+40 = 155, a gain of only 15. The difference, 80-2*15 = 50, reflects the assumption that the investments are complementary -- that each partner's investment reinforces and increases the productivity of the other person's investment.

These ground rules lead to the payoffs in Table 15-1, in which "his" payoffs are to the right in each pair and "hers" are to the left.

Table 15-1

                                      him
                          invest                don't invest
                      stay      separate      stay      separate
her  invest
        stay        110, 110     60, 60      30, 105     40, 115
        separate     60, 60      60, 60      40, 115     40, 115
     don't invest
        stay        105, 30     115, 40      60, 60      70, 70
        separate    115, 40     115, 40      70, 70      70, 70

Since the decision to invest (or not) precedes the decision to separate (or not), we have to work backward to solve this game. Suppose that there are no vows and both partners invest. Then we have the subgame in the upper left quarter of the table:

                      him
                  stay        separate
her  stay       110, 110      60, 60
     separate    60, 60       60, 60

Clearly, in this subgame, to remain together is a dominant strategy for both partners, so we can identify 110, 110 as the payoffs that will in fact occur in case both partners invest.

Now take the other symmetrical case and suppose that neither partner invests. We then have the subgame at the lower right:

                      him
                  stay        separate
her  stay        60, 60       70, 70
     separate    70, 70       70, 70

Here, again, we have a clear dominant strategy, and it is to separate. The payoffs of symmetrical non-investment are thus 70,70.

Now suppose that only one partner invests, and (purely for illustrative purposes!) we consider the case in which "he" invests and "she" does not. We then have the subgame at the lower left:

                      him
                  stay        separate
her  stay       105, 30      115, 40
     separate   115, 40      115, 40

Here again, separation is a dominant strategy, so the payoffs for the subgame where "he" invests and "she" does not are 115, 40. A symmetrical analysis will give us payoffs of 40, 115 when "she" invests and "he" does not.
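This backward-induction step can be sketched in a few lines of code. The sketch below (Python, with payoff pairs written (hers, his) as in the text) solves each of the four stay/separate subgames of Table 15-1 by weak dominance; the helper `weakly_dominant` is our own construction, not a library function.

```python
# Backward induction on Table 15-1: for each pair of first-period investment
# choices, solve the 2x2 stay/separate subgame by weak dominance.

def weakly_dominant(game, player):
    """Index of a weakly dominant strategy for `player` (0 = her/rows,
    1 = him/columns) in a 2x2 game, or None if there is none."""
    for s in range(2):
        for t in range(2):
            if t == s:
                continue
            if player == 0:
                pairs = [(game[s][o][0], game[t][o][0]) for o in range(2)]
            else:
                pairs = [(game[o][s][1], game[o][t][1]) for o in range(2)]
            # s weakly dominates t: never worse, sometimes strictly better
            if all(a >= b for a, b in pairs) and any(a > b for a, b in pairs):
                return s
    return None

# The four quadrants of Table 15-1, keyed by (she invests?, he invests?).
# Rows are her stay/separate choices; columns are his.
subgames = {
    (True, True):   [[(110, 110), (60, 60)],  [(60, 60), (60, 60)]],
    (True, False):  [[(30, 105), (40, 115)],  [(40, 115), (40, 115)]],
    (False, True):  [[(105, 30), (115, 40)],  [(115, 40), (115, 40)]],
    (False, False): [[(60, 60), (70, 70)],    [(70, 70), (70, 70)]],
}

reduced = {}
for invests, game in subgames.items():
    r = weakly_dominant(game, 0)
    c = weakly_dominant(game, 1)
    reduced[invests] = game[r][c]
    print(invests, "->", reduced[invests])
```

Running this reproduces the four subgame outcomes derived above: (110, 110), (40, 115), (115, 40), and (70, 70).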

Putting these subgame outcomes together in a payoff table for the decision to invest or not invest we have:

Table 15-2

                                he
                      invest        don't invest
she  invest          110, 110         40, 115
     don't invest    115, 40          70, 70

This game resembles the Prisoners' Dilemma, in that non-investment is a dominant strategy, but when both players play their dominant strategies, both are worse off than they would be if both played the non-dominant strategy. Anyway, we identify 70, 70 as the subgame perfect equilibrium payoffs in the absence of marriage vows.
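As a mechanical check on this reasoning, a short sketch can enumerate best replies in Table 15-2 and confirm that mutual non-investment is the unique pure-strategy equilibrium:

```python
# Check that "don't invest" is each partner's best reply to everything in
# Table 15-2, so (don't, don't) is the unique pure-strategy equilibrium.

table = {  # (her move, his move) -> (her payoff, his payoff)
    ("invest", "invest"): (110, 110),
    ("invest", "dont"):   (40, 115),
    ("dont", "invest"):   (115, 40),
    ("dont", "dont"):     (70, 70),
}
moves = ("invest", "dont")

def best_reply(player, other_move):
    """Best move for `player` (0 = she, 1 = he) against the other's move."""
    if player == 0:
        return max(moves, key=lambda m: table[(m, other_move)][0])
    return max(moves, key=lambda m: table[(other_move, m)][1])

nash = [(h, m) for h in moves for m in moves
        if best_reply(0, m) == h and best_reply(1, h) == m]
print(nash)   # [('dont', 'dont')]
```

The equilibrium payoff (70, 70) is Pareto-dominated by (110, 110), which is the Prisoners' Dilemma structure noted in the text.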

But now suppose that, back at the beginning of things, the pair have the option to take, or not to take, a vow to stay together regardless. If they take the vow, only the "stay together" payoffs would remain as possibilities. If they do not take the vow, we know that there will be a separation and no investment, so we need consider only that possibility. In effect, there are three strategies: take a vow and invest, take a vow and don't invest, or don't take a vow. We have

Table 15-3

                                         he
                        vow & invest   vow & don't invest   don't vow
she  vow & invest         110, 110          30, 105           70, 70
     vow & don't invest   105, 30           60, 60            70, 70
     don't vow             70, 70           70, 70            70, 70

In this game, there is no dominant strategy. However, the only strict Nash equilibrium -- the only outcome from which either player's unilateral deviation leaves that player strictly worse off -- is for each player to take the vow and invest, and thus the payoff that will occur if a vow can be taken is at the upper left -- 110, 110, the "efficient" outcome. In effect, willingness to take the vow is a "signal" that the partner intends to invest in the relationship -- if (s)he didn't, it would make more sense for him (her) to avoid the vow. Both partners are better off if the vow is taken, and if they had no opportunity to bind themselves with a vow, they could not attain the blissful outcome at the upper left.
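We can verify this by enumeration. The sketch below checks every profile of Table 15-3 for strict equilibrium, i.e. for the property that any unilateral deviation strictly lowers the deviator's payoff (profiles like mutual "don't vow" survive only as weak equilibria, since a lone vow changes nothing):

```python
# Enumerate the strict Nash equilibria of Table 15-3.
# Payoff pairs are (hers, his), as in the text.

S = ("vow_invest", "vow_dont", "no_vow")
pay = {
    ("vow_invest", "vow_invest"): (110, 110),
    ("vow_invest", "vow_dont"):   (30, 105),
    ("vow_invest", "no_vow"):     (70, 70),
    ("vow_dont", "vow_invest"):   (105, 30),
    ("vow_dont", "vow_dont"):     (60, 60),
    ("vow_dont", "no_vow"):       (70, 70),
    ("no_vow", "vow_invest"):     (70, 70),
    ("no_vow", "vow_dont"):       (70, 70),
    ("no_vow", "no_vow"):         (70, 70),
}

def strict_nash(h, m):
    """True if every unilateral deviation strictly hurts the deviator."""
    her_ok = all(pay[(h, m)][0] > pay[(d, m)][0] for d in S if d != h)
    his_ok = all(pay[(h, m)][1] > pay[(h, d)][1] for d in S if d != m)
    return her_ok and his_ok

strict = [(h, m) for h in S for m in S if strict_nash(h, m)]
print(strict)   # [('vow_invest', 'vow_invest')]
```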

Thus, when each partner decides whether or not to take the vow, each rationally expects a payoff of 110 if the vow is taken and 70 if not, and so, the rational thing to do is to take the vow. Of course, this depends strictly on the credibility of the commitment. In a world in which marriage vows become of questionable credibility, this reasoning breaks down, and we are back at Table 15-2, the Prisoners' Dilemma of "investment in the relationship." Some sort of first-stage commitment is necessary. Perhaps emotional commitment will be enough to make the partnership permanent -- emotional commitment is one of the things that is missing from this example. But emotional commitment is hard to judge. One of the things a credible vow does is to signal emotional commitment. If there are no vows that bind, how can emotional commitment be signaled? That seems to be one of the hard problems of living in modern society!

There is a lot of common sense here that your mother might have told you -- anyway my mother would have! What the game-theoretic analysis gives us is an insight on why Mom was right, after all, and how superficial reasoning can mislead us. As we compare Tables 15-2 and 15-3, we can observe that -- given the choices made, that is, reading down a column or across a row -- no one is ever better off with Table 15-3 (vow) than with Table 15-2 (no vow). And except for the upper left quadrant, both parties are worse off with the vow than without it. Thus I might reason -- wrongly! -- that since, ceteris paribus, I am better off with freedom of choice than without it, I had best not take the vow. But this illustrates a pitfall of "ceteris paribus" reasoning. In this comparison, ceteris are not paribus. Rather, the outcomes of the various subgames -- the "ceteris" -- depend on the payoff possibilities as a whole. The vow changes the whole set of payoff possibilities in such a way that the "ceteris" are changed -- non paribus -- and the outcome improved. The set of possible outcomes is worse, but the selection among the available set is so much improved that both parties end up substantially better off (110 rather than 70) than they would be had they not agreed to restrain their freedom of choice.

In other words: Cent' Anni!

The Paradox of Benevolent Authority

The "Prisoners' Dilemma" is without doubt the most influential single analysis in Game Theory, and many social scientists, philosophers and mathematicians have used it as a justification for interventions by governments and other authorities to limit individual choice. After all, in the Prisoners' Dilemma, rational self-interested individual choice makes both parties worse off. A difficulty with this sort of reasoning is that it treats the authority as a deus ex machina -- a sort of predictable, benevolent robot who steps in and makes everything right. But a few game theorists and some economists (influenced by Game Theory but not strictly working in the Game Theoretic framework) have pointed out that the authority is a player in the game, and that makes a difference. This essay will follow that line of thought in an explicitly Game-Theoretic (but very simple) frame, beginning with the Prisoners' Dilemma. Since we begin with a Prisoners' Dilemma, we have two participants, whom we will call "commoners," who interact in a Prisoners' Dilemma with payoffs as follows:

Table 16-1

                                Commoner 1
                           cooperate     defect
Commoner 2  cooperate       10, 10        0, 15
            defect          15, 0         5, 5

The third player in this game is the "authority," and she (or he) is a very strange sort of player. She can change the payoffs to the commoners. The authority has two strategies, "penalize" or "don't penalize." If she chooses "penalize," the payoffs to the two commoners are reduced by 7. If she chooses "don't penalize," there is no change in the payoffs to the two commoners.

The authority also has two other peculiar characteristics. First, she is benevolent: her payoff is the sum of the two commoners' payoffs, so that she "feels their pain." Second, she is flexible: she chooses her strategy last, after observing the strategies the commoners have chosen.

Now suppose that the authority chooses the strategy "penalize" if, and only if, one or both of the commoners chooses the strategy "defect." The payoffs to the commoners would then be

Table 16-2

                                Commoner 1
                           cooperate     defect
Commoner 2  cooperate       10, 10       -7, 8
            defect           8, -7       -2, -2

But the difficulty is that this does not allow for the authority's flexibility and benevolence. Is that indeed the strategy the authority will choose? The strategy choices are shown as a tree in Figure 16-1 below. In the diagram, we assume that commoner 1 chooses first and commoner 2 second. In a Prisoners' Dilemma, it doesn't matter which participant chooses first, or whether they both choose at the same time. What is important is that the authority chooses last.

Figure 16-1

What we see in the figure is that the authority has a dominant strategy: not to penalize. No matter what the two commoners choose, imposing a penalty will make them worse off, and since the authority is benevolent -- she "feels their pain," her payoffs being the sum total of theirs -- she will always have an incentive to let them off, not to penalize. But the result is that she cannot change the Prisoners' Dilemma. Both commoners will choose "defect," the payoffs will be (5,5) for the commoners, and 10 for the authority.

Perhaps the authority will announce that she intends to punish the commoners if they choose "defect." But they will not be fooled, because they know that, whatever they do, punishment will reduce the payoff to the authority herself, and that she will not choose a strategy that reduces her payoffs. Her announcements that she intends to punish will not be credible.
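This argument is itself a backward induction, and can be sketched in code. The sketch below is a minimal model assuming, as in the text, that the authority's payoff is the sum of the commoners' payoffs and that a penalty subtracts 7 from each commoner:

```python
# The benevolent authority moves last; her payoff is the sum of the
# commoners' payoffs, and "penalize" subtracts 7 from each commoner.

base = {  # (commoner 1's move, commoner 2's move) -> payoffs (Table 16-1)
    ("coop", "coop"): (10, 10),
    ("coop", "defect"): (0, 15),
    ("defect", "coop"): (15, 0),
    ("defect", "defect"): (5, 5),
}

def outcome(moves):
    """Payoffs after the flexible, benevolent authority's last move."""
    p1, p2 = base[moves]
    # She penalizes only if that raises her own payoff, the sum p1 + p2.
    if (p1 - 7) + (p2 - 7) > p1 + p2:
        p1, p2 = p1 - 7, p2 - 7
    return p1, p2

# Penalizing always costs her 14, so she never penalizes...
assert all(outcome(m) == base[m] for m in base)

# ...and the commoners therefore face the unchanged dilemma: defect dominates.
for other in ("coop", "defect"):
    assert outcome(("defect", other))[0] > outcome(("coop", other))[0]

result = outcome(("defect", "defect"))
print(result, "authority:", sum(result))   # (5, 5) authority: 10
```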

EXERCISE In this example, a punishment must fall on both commoners, even if only one defects. Does this make a difference for the result? Assume instead that the authority can impose a penalty on one and not the other, so that the authority has 4 strategies: no penalty, penalize commoner 1, penalize commoner 2, penalize both. What are the payoffs to the authority in the sixteen possible outcomes that we now have? Under what circumstances will a benevolent authority penalize? What are the equilibrium outcomes in this more complicated game?

There are two ways to solve this problem. First, the authority might not be benevolent. Second, the authority might not be flexible.

Non-benevolent authority:

We might change the payoffs to the authority so that the authority no longer "feels the pain" of the commoners. For example, make the payoff to the authority 1 if both commoners cooperate and zero otherwise. We might call an authority with a payoff system like this a "Prussian" authority, since she values "order" regardless of the consequences for the people, an attitude sometimes associated with the Prussian state. She then has nothing to lose by penalizing the commoners whenever there is defection, and announcements that she will penalize defection become credible. EXERCISE Suppose the authority is sadistic; that is, the authority's payoff is 1 if a penalty is imposed and 0 otherwise. What will be the game equilibrium in this case?

Non-flexible authority:

If the authority can somehow commit herself to imposing the penalty in some cases and not in others, perhaps by posting a bond greater than the 14 point cost of a penalty (7 to each commoner), then the announcement of an intention to penalize would become credible. The announcement and commitment would then be a strategy choice that the authority would make first, rather than last. Let's say that at the first step, the authority has two strategies: commit to a penalty whenever any commoner chooses "defect," or don't commit. We then have a tree diagram like Figure 16-2. What we see in Figure 16-2 is that if the authority commits, the outcome will be cooperation and a payoff of 20 for her, at the top; but if she does not commit, the outcome will be at the bottom -- both commoners defect, no penalty is imposed, and the payoff will be 10 for the authority. So the authority will choose the strategy of commitment, if she can, and in that case the rational, self-interested action of the commoners will lead to cooperation and good results. But, if the commoners irrationally defect, or if they don't believe the commitment and defect for that reason, then the authority is boxed in. She has to impose a penalty even though it makes everyone worse off. In short, she cannot be flexible.

Figure 16-2
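The comparison of the two branches of the tree can be sketched as a full backward induction: the authority commits (or not) first, commoner 1 moves next, and commoner 2 moves last among the commoners (the move order among the commoners is immaterial, as noted above):

```python
# Backward induction on the commitment game. With commitment, any defection
# triggers the 7-point penalty automatically; without it, no penalty is ever
# imposed (the benevolent authority would only hurt herself).

base = {
    ("coop", "coop"): (10, 10),
    ("coop", "defect"): (0, 15),
    ("defect", "coop"): (15, 0),
    ("defect", "defect"): (5, 5),
}

def payoffs(m1, m2, committed):
    p1, p2 = base[(m1, m2)]
    if committed and "defect" in (m1, m2):   # bound by the commitment
        p1, p2 = p1 - 7, p2 - 7
    return p1, p2

def play(committed):
    """Commoner 2 best-responds to commoner 1; commoner 1 anticipates this."""
    def reply2(m1):
        return max(("coop", "defect"), key=lambda m2: payoffs(m1, m2, committed)[1])
    m1 = max(("coop", "defect"), key=lambda m: payoffs(m, reply2(m), committed)[0])
    return payoffs(m1, reply2(m1), committed)

for committed in (True, False):
    p1, p2 = play(committed)
    print("commit" if committed else "no commit", "->", (p1, p2),
          "authority:", p1 + p2)
```

This prints cooperation with authority payoff 20 on the commitment branch and mutual defection with authority payoff 10 otherwise, so commitment is the authority's better first move.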

What we have seen here are two principles that play an important part in modern macroeconomics. Many modern economists apply these principles to the central banks that control the money supply in modern economies. They are

The principle of "rules rather than discretion."

That is, the authority should act according to rules chosen in advance, rather than responding flexibly to events as they occur. In the case of the central banks, they should control the money supply or the interest rate on public debt (there is controversy about which) according to some simple rule, such as increasing the money supply at a steady rate or raising the interest rate when production is close to capacity, to prevent inflation. If some groups in the economy push their prices up, the monetary authority might be tempted to print money, which would cause inflation and help other groups to catch up with their prices, and perhaps reduce unemployment. But this must be avoided, since the groups will come to anticipate it and just push their prices up all the faster.

The principle of credibility.

It is not enough for the authority to be committed to the simple rule. The commitment must be credible if the rule is to have its best effect.

The difficulty is that it may be difficult for the authority to commit itself and to make the commitment credible. This can be illustrated by another application: dealing with terrorism. Some governments have taken the position that they will not negotiate with terrorists who take hostages, but when the terrorists actually have hostages, the pressure to make some sort of a deal can be very strong. What is to prevent a sensitive government from caving in -- just this once, of course! And potential terrorists know those pressures exist, so that the commitments of governments may not be credible to them, even when the governments have a "track record" of being tough.

This may have an effect on the way we want our institutions to function, at the most basic, more or less constitutional level. For example, in countries with strong currencies, like Germany and the United States, the central bank or monetary authority is strongly insulated from democratic politics. This means that the pressures for a more "flexible" policy expressed by voters are not transmitted to the monetary authority -- or, anyway, they are not as strong as they might otherwise be -- so the monetary authority is more likely to commit itself to a simple rule and the commitment will be more credible.

Are these "conservative" or "liberal" ideas? Some would say that they are conservative rather than liberal, on the grounds that liberals believe in flexibility -- considering each case on its own merits, and making the best decision in the circumstances, regardless of unthinking rules. But it may be a little more complex than that. This and the previous essay have considered particular cases in which commitment and rules work better than flexibility. There may be many other cases in which flexibility is needed. I should think that the "liberal" approach would be to consider the case for commitment and for rules rather than discretion on its merits in each instance, rather than relying on an unthinking rule against rules! Anyway, conservative or liberal or radical (as it could be!), the theory of games in extended form is now a key tool for understanding the role of commitment and rules in any society.

A Theory of Burnout

As an illustration of the concepts of sequential games and subgame perfect equilibrium, we shall consider a case in the employment relationship. This game will be a little richer in possibilities than the economics textbook discussion of the supply and demand for labor, in that we will allow for two dimensions of work the principles course does not consider: variable effort and the emotional satisfactions of "meaningful work." We also allow for a sequence of more or less reliable commitments in the choice of strategies.

We consider a three-stage game. At the first stage, one player in the game, the "worker," must choose between two kinds of strategies, that is, two "jobs." In either job, the worker will later have to choose between two rates of effort, "high" and "low." In either job, the output is 20 in the case of high effort and 10 if effort is low. We suppose that the first job is a "meaningful job," in the sense that it meets needs with which the worker sympathizes. As a consequence of this, the worker "feels the pain" of unmet needs when her or his output falls below the potential output of 20. This reduces her or his utility payoff when she or he shirks at the lower effort level. Of course, her or his utility also depends on the wage and (negatively) on effort. Accordingly, in Job 1 the worker's payoff is

    wage - 0.3*(20 - output) - 2*(effort)

where effort is zero or one. The other job is "meaningless," so that the worker's utility does not depend on output; in Job 2 the payoff is

    wage - 2*(effort)

At the second stage of the game the other player, the "employer," makes a commitment to pay a wage of either 10 or 15. Finally, the worker chooses an effort level, either 0 or 1.

The payoffs are shown in Table 17-1.

Table 17-1

                           Job 1                  Job 2
                   effort 0    effort 1    effort 0    effort 1
wage  high          -5, 12      5, 13       -5, 15      5, 13
      low            0, 7      10, 8         0, 10     10, 8

In each cell of the matrix, the worker's payoff is to the right of the comma and the employer's to the left. Let us first see what is "efficient" here. The payoffs are shown in Figure 17-1. Payoffs to the employer are on the vertical axis and those to the worker on the horizontal axis. Possible payoff pairs are indicated by stars-of-David. In economics, a payoff pair is said to be "efficient," or equivalently "Pareto-optimal," if it is not possible to make one player better off without making the other player worse off. The pairs labeled A, B, and C have that property. They are (10,8), (5,13), and (-5,15). The others are inefficient. The red line linking A, B, and C is called the utility possibility frontier. Any pairs to the left of and below it are inefficient.

Figure 17-1: Game Outcomes
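The efficiency claim can be verified mechanically. A small sketch, using the six distinct (employer, worker) payoff pairs of Table 17-1:

```python
# Identify the Pareto-optimal (employer, worker) payoff pairs of Table 17-1.
# A pair is efficient if no other feasible pair is at least as good for both
# players and strictly better for one.

pairs = {(-5, 12), (5, 13), (-5, 15), (0, 7), (10, 8), (0, 10)}

def dominated(p):
    # q >= p componentwise with q != p implies q is strictly better somewhere
    return any(q != p and q[0] >= p[0] and q[1] >= p[1] for q in pairs)

efficient = sorted(p for p in pairs if not dominated(p))
print(efficient)   # [(-5, 15), (5, 13), (10, 8)] -- points C, B, and A
```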

Now let us explore the subgame perfect equilibrium of this model. First, we may see that the low wage is a "dominant strategy" for the employer. That is, regardless of which strategy the worker chooses -- job 1 and low effort, job 2 and high effort, and so on -- the employer is better off with low wages than with high. Thus the worker can anticipate that the wage will be low. Let us work backward. Suppose that the worker chooses Job 2 at the first stage. This limits the game to the right-hand side of the table, which has a structure very much like the Prisoners' Dilemma. In this subgame, both players have dominant strategies. The worker's dominant strategy is low effort, and the Prisoners' Dilemma-like outcome is at (0,10). This is the outcome the worker must anticipate if he chooses Job 2.

What if he chooses Job 1? Then the game is limited to the left-hand side. In this game, too, the worker, like the employer, has a dominant strategy, but in this case it is high effort. This subgame is not Prisoners' Dilemma-like, since the equilibrium -- (10,8) -- is an efficient one. This is the outcome the worker must expect if she or he chooses Job 1, "meaningful work."

But the worker is better off in the subgame defined by "nonmeaningful work," Job 2. Accordingly, she will choose Job 2, and thus the equilibrium of the game as a whole (the subgame perfect equilibrium) is at (0,10). It is indicated by point E in the figure, and is inefficient.
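The whole three-stage argument can be condensed into a backward-induction sketch, reconstructing the payoffs from the formulas above and assuming, as the table's entries indicate, that the employer's payoff is output minus the wage:

```python
# Three-stage backward induction for the burnout game: the worker picks a
# job, the employer commits to a wage, the worker finally picks effort.

def payoffs(job, wage, effort):
    output = 20 if effort == 1 else 10
    # In the "meaningful" Job 1, the worker feels the pain of unmet needs.
    worker = wage - 2 * effort - (0.3 * (20 - output) if job == 1 else 0)
    employer = output - wage
    return employer, worker

def solve():
    results = {}
    for job in (1, 2):
        # Stage 3: effort best-responds to the wage.
        def effort_choice(wage):
            return max((0, 1), key=lambda e: payoffs(job, wage, e)[1])
        # Stage 2: the employer picks the wage anticipating that effort.
        wage = max((10, 15), key=lambda w: payoffs(job, w, effort_choice(w))[0])
        results[job] = payoffs(job, wage, effort_choice(wage))
    # Stage 1: the worker picks the job with the higher anticipated utility.
    job = max((1, 2), key=lambda j: results[j][1])
    return job, results[job]

print(solve())   # (2, (0, 10)) -- Job 2, low wage, low effort: point E
```

This reproduces the text's conclusion: the worker chooses the nonmeaningful Job 2, the employer pays the low wage, effort is low, and the inefficient outcome (0, 10) results.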

Why is meaningful work not chosen in this model? It is not chosen because there is no effective reward for effort. With meaningful work, the worker can make no higher wage, despite her greater effort. Yet she does not reduce her effort because doing so brings the greater utility loss of seeing the output of meaningful work decline on account of her decision. The dilemma of having to choose between a financially unrewarded extra effort and witnessing human suffering on account of one's failure to make the effort seems to be a very stylized account of what we know as "burnout" in the human service professions.

Put differently, workers do not choose meaningful work at low wages because they have a preferable alternative: shirking at low effort levels in nonmeaningful jobs. Unless the meaningful jobs pay enough to make those jobs, with their high effort levels, preferable to the shirking alternative, no-one will choose them.

Inefficiency in Nash equilibria is a consequence of their noncooperative nature, that is, of the inability of the players to commit themselves to efficiently coordinated strategies. Suppose they could do so -- what then? Suppose, in particular, that the employer could commit herself or himself, at the outset, to pay a high wage, in return for the worker's commitment to choose Job 1. There is no need for an agreement about effort -- of the remaining outcomes, in the upper left corner of the table, the worker will choose high effort and (5,13), because of the "meaningful" nature of the work. This is an efficient outcome.

And that, after all, is the way markets work, isn't it? Workers and employers make reciprocal commitments that balance the advantages to one against the advantages to the other? It is, of course, but there is an ambiguity here about time. There is, of course, no measurement of time in the game example. But commitments to careers are lifetime commitments, and correspondingly, the wage incomes we are talking about must be lifetime incomes. The question then becomes, can employers make credible commitments to pay high lifetime income to workers who choose "meaningful" work with its implicit high effort levels? In the 1960's, it may have seemed so; but in 1995 it seems difficult to believe that the competitive pressures of a profit-oriented economic system will permit employers to make any such credible commitments.

This may be one reason why "meaningful work" has generally been organized through nonprofit agencies. But under present political and economic conditions, even those agencies may be unable to make credible commitments of incomes that can make the worker as well off in a high-effort meaningful job as in a low-effort nonmeaningful one. If this is so, there may be little long-term hope for meaningful work in an economy dominated by the profit system.

Lest I be misunderstood, I do not mean to argue that a state-organized system would do any better. There is an alternative: a system in which worker incomes are among the objectives of enterprises, that is, a cooperative system. It appears to be possible that such a system could generate meaningful work. There is empirical evidence that cooperative enterprises do in fact support higher effort levels than either profit-oriented or state organizations.

Of course, some nonmeaningful work has to be done, and it remains true that when nonmeaningful work is done it is done inefficiently and at a low effort level, that is, at E in the figure. In other words, the fundamental source of inefficiency in this model is the inability of the workers to make a credible commitment to high effort levels. If high effort could somehow be assured, then (depending on bargaining power) a high-effort efficient outcome would become a possibility in the nonmeaningful work subgame, and this in turn would eliminate the worker's incentive to choose nonmeaningful work in order to shirk. (If worker bargaining power should enforce the outcome at C, which is Pareto-optimal, the shirking nonmeaningful strategy would still dominate meaningful work). However, it does seem that it is very difficult to make commitments to high effort levels credible, or enforceable, in the context of profit-oriented enterprises.

It may be, then, that the problem of finding meaningful work, and of burnout in fields of meaningful work, is a relatively minor aspect of the far broader question of effort commitment in modern economic systems. Perhaps it will do nevertheless as an example of the application of subgame perfect equilibrium concepts to an issue of considerable interest to many modern university students.

The Essence of Bankruptcy

Bankruptcy is badly understood in modern economics. This is equally true at the most elementary and most advanced levels, but, of course, the sources of confusion are different in these different contexts.

For the elementary student, there is the tendency to confuse bankruptcy, the decision to shut down production, and "going out of business," that is, liquidation. The undergraduate textbook encourages this, since it considers only the shut-down decision, and the timeless model usual in the undergraduate textbook makes the shut-down decision appear to be an irreversible one. The textbook discussion of the shut-down observes that the business will shut down if it cannot cover its variable costs, and this illustrates a point about opportunity costs -- fixed costs are not considered because they are not opportunity costs in the short run. Bankruptcy occurs when the firm cannot, or will not, cover its debt service payments: quite a different thing. Debt service costs are usually thought of as fixed, not variable costs.

In real businesses, of course, bankruptcy, liquidation, and shut-down are three quite different things that may appear in various combinations or entirely separately. A business may be reorganized under bankruptcy and continue doing business with the former creditors as equity owners -- neither shut down nor liquidated. The business that shuts down may not be bankrupt -- it may continue to make debt service payments out of its cash reserves and resume production when conditions merit. And a company may be liquidated, for example at the death of a proprietor, although it is able to cover its variable costs and its debt service payments (although this will only occur when the transaction costs of finding a buyer are so high as to make sale of the business infeasible).

Small wonder, then, that the undergraduate economics student finds the shut-down analysis a little confusing -- it abstracts from almost everything that matters! But more advanced economists will find bankruptcy confusing for another reason. The reason is related to the phrase "the firm cannot, or will not, cover its debt service payments." We may think of a lending agreement as a solution to a cooperative game, that is, a game in which both players commit themselves at the outset to coordinated strategies. The repayment of debt service is the strategy the firm has committed itself to. For the firm to fail to pay its debt service contradicts the supposition that the firm had, in the first instance, committed itself. Moreover, in letting the firm out of its contract, the creditors lose -- so why should they do it? It seems that we must fall back on the first part of the statement: the firm cannot make its debt service payments. Some unavoidable (but not clearly foreseen) circumstance makes it impossible for the debt service to be paid. We then interpret the debt contract as a commitment to pay "if possible," or with some other such weasel-words, and we understand why the creditor capitulates: she or he has no choice.

But how can it be that "the firm cannot" pay its debt service? We need to make our picture a little more detailed.

First, uncertainty clearly plays a part in it. If bankruptcy were certain, there would be no lending. Accordingly, we represent uncertainty in the usual way in modern economics: we suppose that the world may realize one of two or more states. At the outset, the state of the world is not known. After some decisions and commitments are made, the state of the world is revealed, and some of the decisions and commitments made at the first stage must be reconsidered. Bankruptcy is such a reconsideration of commitments made in ignorance of the state of the world: it occurs only in some states of the world, and the payoff to the lender in the other states is good enough to make the deal acceptable as a whole.

Second, we must be a little more careful about just who "the firm" is, since it is a compound player. Let us adopt the John Bates Clark model of the business enterprise, and of the market economy, as a first approximation. In this model there are capitalists (lenders, for our purposes), suppliers of labor services, that is workers, and "the entrepreneur," who owns nothing and whose services are those of coordination between the other two groups.

With these specifics in mind, let us return to the shut-down decision as it is portrayed in the intermediate microeconomics text. What leads "the firm" to shut down? What happens is that the state of the world realized is a relatively bad one. That is, the conditions for production and/or demand are poor, so that the enterprise is unable to "cover its variable costs." In other words, it is unable to pay the workers enough to keep them in the enterprise. The key point here is that the workers have alternatives. The revenue of the enterprise is so little that, even if the workers get it all, they do not make as much as they would in their best alternatives. Saying "the firm cannot cover its variable costs" is a coded way of saying "the firm cannot recruit labor with its available revenues." In such a case, there is clearly no alternative to shutting down.

But, as we have observed, a firm may go bankrupt but not shut down, instead continuing to produce under reorganized ownership. How would this occur? The state of the world is not quite as bad: the enterprise can earn enough revenue to pay its workers their best alternative wages, but having done that, there is not enough left to pay the debt service. The entrepreneur has only two choices: to cut the wages below the workers' best alternative pay, lose them all, produce nothing, and default on all of the debt service; or to pay the workers at their best alternative, produce something, and pay something toward the debt service. Clearly, the latter is in the interest of the lenders, so they renegotiate the note.1

In all of this, "the entrepreneur" has played a passive role. John Bates Clark's "entrepreneur" is not much of a player, from the point of view of game theory, anyway. His role is to combine capital and labor in such a way as to maximize profits. In effect, he is an automaton whose programmed decisions define the rules of a game between the workers and the bankers. At the point of bankruptcy, his role is even less active. The choices and commitments are made by the substantive players: capitalists and workers. The essence of bankruptcy is a game played between a lender and a group of workers. We may as well eliminate the entrepreneur entirely, and think of the firm as a worker-cooperative.2 From here on, we shall follow that strategy.

To make things more explicit still, let us consider a numerical example. The players are, as before, a banker and a group of workers. If the banker lends and the workers work, the enterprise can produce a revenue that depends on the state of the world. There are three states. The best state is the "normal" one, so we assign it a probability of 0.9. The other two states are bad and worse -- a bankruptcy state and a shut-down state -- with probabilities of 0.05 each. Thus production possibilities are as shown in Table 18-1.

Table 18-1

    state    revenue    probability
      1        3000        0.9
      2        2000        0.05
      3        1000        0.05

We suppose that the safe rate of return (the opportunity cost of capital) is 1%, and that the lender, being profit oriented, offers a loan of 1000 to enable production to take place. The contract rate of interest is 10%; i.e., 1100 has to be paid back at the end of the period. We suppose, also, that the workers can earn alternative pay amounting to 1500.

If the loan is made, the state of the world is revealed, and then the participants reconsider their strategy choices in the light of the new information. Should the bank make the loan? Should the workers' cooperative accept it? We shall have to consider the various outcomes and then apply "backward induction" to get the answer.

What then happens in state 3? The answer is that in state 3, the members of the cooperative all resign in order to take their best alternative opportunities, at 1500 > 1000, so that the cooperative spontaneously ceases to exist, and the lender gets nothing.

What about state 1? The enterprise revenue is enough to pay the 1100 in debt service, and the workers' income, 1900, is more than their best alternative, so they do stay and produce, and both the bank and the workers' cooperative are better off.

We now turn to the pivotal state 2. Here, there is enough revenue to pay the debt service, but if it is paid, the workers get only 900 < 1500. In such a case, again, the worker-members of the cooperative will resign, the cooperative will dissolve for lack of members, and the bank will get nothing. On the other hand, if the bank renegotiates for partial repayment of 500 or less, then the workers get 1500 and the cooperative continues. Thus, in this state, the bank renegotiates and earns 500.

The bank's expected repayment thus is

.9(1100) + .05(500) + .05(0) = 1015 > 1010

Thus the bank makes more than its best alternative and will accept the contract. As for the workers in the cooperative, they make a mathematical expectation of

.9(1900) + .05(1500) + .05(1500) = 1860 > 1500

And so they, too, accept the contract. Thus the loan is made, despite a .05 probability of bankruptcy and a .05 probability of outright default.

In many games of this kind one or another player can obtain a better result if he can commit himself credibly at the outset to a strategy which may seem less advantageous once the state of the world is known and others have made their decisions. Would the bank be better off if it could commit itself not to renegotiate? The answer is that it would not. Its payoffs would be

.9(1100) + .05(0) + .05(0) = 990 < 1010

The lenders would be worse off and, if (for example) statute law forbade them from renegotiating, they would refuse to make the loan!
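The backward-induction arithmetic above can be checked with a short script. This is a sketch of the numerical example only; the `outcome` function simply encodes the three cases worked out in the text:

```python
# Expected payoffs in the lending game, found by backward induction.
# Figures are those of the numerical example in the text.
states = [(3000, 0.9), (2000, 0.05), (1000, 0.05)]  # (revenue, probability)
DEBT_SERVICE = 1100   # contractual repayment on the loan of 1000
ALT_WAGE = 1500       # workers' best alternative pay
SAFE_RETURN = 1010    # 1000 invested at the 1% safe rate

def outcome(revenue, bank_renegotiates):
    """Payoffs (bank, workers) in one state, after commitments are reconsidered."""
    if revenue - DEBT_SERVICE >= ALT_WAGE:
        return DEBT_SERVICE, revenue - DEBT_SERVICE   # state 1: full repayment
    if revenue >= ALT_WAGE and bank_renegotiates:
        return revenue - ALT_WAGE, ALT_WAGE           # state 2: partial repayment
    return 0, ALT_WAGE   # workers desert for their alternative; bank gets nothing

def expected(bank_renegotiates):
    bank = sum(p * outcome(r, bank_renegotiates)[0] for r, p in states)
    workers = sum(p * outcome(r, bank_renegotiates)[1] for r, p in states)
    return bank, workers

print(expected(True))    # renegotiation allowed: bank expects 1015 > 1010
print(expected(False))   # committed never to renegotiate: bank expects 990 < 1010
```

The committed-bank case reproduces the point made above: a lender bound never to renegotiate expects less than the safe return and would refuse to lend.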

But what about the workers? It is their desertion that leads the enterprise to be abandoned if the debt service is paid in state 2. What if they could be somehow bound to the firm? Slavery offers one possibility. In a system that permits slavery, "the entrepreneur" might buy slaves instead of hiring free workers. In state of nature 3, "the entrepreneur" would rent out the slave work force for 1500, pay the 1100 debt service, and pocket the profits (assuming the cost of food necessary to keep the slaves productive is less than 400). In state 2, "the entrepreneur" would require the slaves to work in the firm, produce 2000, pay the debt service, and pocket 900 less the cost of their food. The bank would get its debt service in every state (barring slave starvation) and might well prefer to lend to slavemasters rather than worker cooperatives or John Bates Clark style firms.

In the context of the John Bates Clark firm, the desertion of the workers in states 2 and 3 comes as no surprise to us -- the workers are hired by "the entrepreneur" at mutual convenience and are expected to leave whenever it benefits them to do so. In this example, however, the loan is made to a cooperative association of the workers, their own association. If it were made to them individually, they would be no less responsible for it after they had moved on to their other, better-paying jobs. But the obligation to pay the loan has been assumed by a group of workers, as a group, and the group can continue to exist only so long as it is in the interest of the workers as individuals for it to do so. And this does not reflect the constitution of the firm, but the liberal constitution of society, that holds that no agency, even one constructed by the workers, may require a person to work without offering a payment sufficient to get the worker's assent.

And this is the essence of the case for the proprietary or corporate enterprise as well. The proprietor or investor-owned corporation is no more than a middleman between a group of workers and a bank, so far as bankruptcy is concerned. The essence of bankruptcy is a renegotiation of the loan contract between a lender and a group of workers, and laws exempting the debtor from the full amount of the debt, in appropriate circumstances, are laws for the protection of the creditors, not of the debtors.


 

Auction Types - Standard


 

Here are the types of auctions you will encounter on this site.

English Auctions

English auctions are probably the most common type. Users bid the highest price they are willing to pay for an item and bidding activity stops when the auction duration is complete. The item is sold to the highest bidder at their bid price.

English auctions also allow the seller to specify a reserve price below which the item will not be sold.

Vickrey Auctions

The Vickrey auction allows for selling single items as does the English Auction. The difference is that the highest bidder obtains the item at the price offered by the second highest bidder. This is a good format because bidders have the incentive to bid what they think the item is worth and not worry about what others will bid.

Dutch Auctions

Dutch auctions are a special type of auction designed to handle the case where a seller has a number of identical items to sell. The seller specifies the minimum price (starting bid) and the exact number of items available at that price. The bidders bid at or above that minimum price for the number of items they are interested in buying. At the end of the auction, the highest bidders earn the right to purchase those items at the minimum successful bid.

Here is an example: say there are twenty-five widgets being sold at $75.00, and forty-five bidders each bid for one widget at $75.00. In this case, only twenty-five bidders can be successful. Since the bid amounts are the same, ties are broken by time: the earlier bids take the merchandise.

Now, let's say that one of those people bids $100 for one widget. Since his bid is higher than all the others, he will certainly be one of the bidders to get the merchandise.

If enough bidders bid above the starting price, the final selling price will rise as well. Conversely, if fewer than twenty-five people bid in our example "widget" auction, only that number of widgets will be sold, at the opening price of $75.00. For the selling price to rise past the opening price specified by the seller, demand at higher prices must cover the entire supply. In our example, the selling price would increase only if twenty-five or more widgets were bid on at amounts above the opening price.

In a case where a bidder bids for multiple quantities, the lowest successful bidder will not always get the full quantity that he or she bid on. If the lowest successful bidder requested a quantity of four widgets, he or she may be entitled to fewer -- perhaps only one. For instance, if twelve higher bidders each bought two widgets, there would be only one widget left. In this case, the original bidder would be entitled to just that one widget, even though he originally bid for four. The way around this problem is to ensure that you are not the lowest successful bidder. Note: A bid's value in the auction is determined by the total number of items bid on, multiplied by the bid price.
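The allocation rule just described can be sketched in a few lines. This assumes bids are ranked by price with ties broken in favor of earlier bids, and that everyone pays the lowest successful bid; ranking by total value (price times quantity), as the note suggests, would only change the sort key:

```python
# Sketch of the multi-unit allocation rule described above.
# `bids` is a list of (price, quantity) pairs in the order received.
def allocate(bids, supply):
    """Return per-bid allocations and the uniform selling price."""
    # Sort by price, highest first; Python's sort is stable, so among
    # equal prices the earlier bid keeps its priority.
    ranked = sorted(enumerate(bids), key=lambda b: -b[1][0])
    allocations = [0] * len(bids)
    remaining = supply
    price = None
    for i, (bid_price, qty) in ranked:
        if remaining == 0:
            break
        allocations[i] = min(qty, remaining)  # marginal bidder may get a partial fill
        remaining -= allocations[i]
        price = bid_price                     # price = lowest successful bid
    return allocations, price

# The widget example: 25 widgets, 45 one-widget bids at the $75.00 opening price.
alloc, price = allocate([(75.00, 1)] * 45, 25)
print(sum(alloc), price)   # the first 25 bidders get one widget each, at 75.0
```

Run on the example above, the first twenty-five bidders each receive one widget and the price stays at the $75.00 opening bid.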

Yankee Auctions

A Yankee Auction is a variation of the Dutch Auction where successful bidders pay what they bid as opposed to paying the price determined by the lowest qualified bidder (as in a Dutch Auction).

Double Auctions

Although not classified as one of the four major auction types, the double auction has been the principal trading format in U.S. financial institutions for over a hundred years.

In this auction both sellers and buyers submit bids which are then ranked highest to lowest to generate demand and supply profiles. From the profiles, the maximum quantity exchanged can be determined by matching selling offers (starting with lowest price and moving up) with demand bids (starting with highest price and moving down). This format allows buyers to make offers and sellers to accept those offers at any particular moment. It can be confusing to think about the double auction in light of overlapping buy and sell orders. A good way to avoid this confusion is to understand that at one single instant of time, they do not overlap.

It works like this: Suppose 4 sellers of foreign exchange offer to sell one unit at prices of 100, 200, 300, and 400 units of domestic currency, and 4 demanders of foreign exchange offer to buy one unit at prices of 400, 300, 250, and 50 units of domestic currency. Supply and demand are met at two units of foreign exchange, but the price would remain indeterminate, falling somewhere between 250 and 300. [Feldman]
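A short script can make the matching procedure concrete. This is a sketch of a one-shot (call-market) clearing; `clear` is an illustrative helper written for this example, not a standard routine:

```python
# Clearing a one-shot double auction: match highest bids with lowest asks.
def clear(bids, asks):
    """Return the quantity traded and the interval of market-clearing prices."""
    bids = sorted(bids, reverse=True)   # demand bids, highest first
    asks = sorted(asks)                 # selling offers, lowest first
    qty = 0
    while qty < min(len(bids), len(asks)) and bids[qty] >= asks[qty]:
        qty += 1                        # one more profitable match
    if qty == 0:
        return 0, None
    # A clearing price must keep every matched trader in and every excluded one out.
    low = max(asks[qty - 1], bids[qty] if qty < len(bids) else 0)
    high = min(bids[qty - 1], asks[qty] if qty < len(asks) else float("inf"))
    return qty, (low, high)

print(clear([400, 300, 250, 50], [100, 200, 300, 400]))
```

Any price strictly inside the returned interval clears exactly the returned quantity, which is why the price is indeterminate within that range.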

The origins of the double auction are not well known, but it is recognized that this form of auction has roots that go back to ancient Egypt and Mesopotamia. Almost certainly the double auction stems from "haggling" in which buyer and seller each suggest prices. [Friedman, Daniel]

Much later (in the last quarter of the nineteenth century), when the telegraph and telephone were invented, traders in the stock market could speak directly to interested outside investors. This was viewed at the time not only as a novelty but also as something of a threat. The computer revolution of the twentieth and twenty-first centuries portends far greater shifts as agents and financial markets become automated. [Friedman, Daniel]

A "continuous double auction" is one in which many individual transactions are carried on at a single moment and trading does not stop as each auction is concluded. The pit of the Chicago commodities market is an example of a continuous double auction, and the New York Stock Exchange is another. In those institutions a specialist matches bids and asking prices.

One interesting variation is the Double Dutch auction. Work on this is being done at the University of Arizona. [McCabe]

It works like this: A buyer price clock starts ticking at a very high price and continues downward. At some point the buyer stops the clock and bids on the unit at a price favorable to him. At this point a seller clock starts upward from a very low price and continues to ascend until stopped by a seller who then offers a unit at that price. Then the buyer clock resumes in a downward direction. The trading period is over when the two prices cross, and at that point all purchases are made at the crossover point.

The DA (double auction) has many variants and is evolving rapidly. Economists believe that the double auction will have many applications as auctions become computerized.

 

Summary

In July 1994 in the ballroom of the Omni Shoreham Hotel in Washington, D.C., a most unusual auction was in progress. No famous paintings, valuable coins, or antique furniture sat on the auction block. For sale was nothing but air: a slice of the electromagnetic spectrum for a new generation of cell phones, pagers, and other wireless communication devices. The U.S. government had never auctioned anything so valuable before, and no one knew just what was going to happen. The Federal Communications Commission (FCC) estimated that the airwave spectrum was worth about $10 billion, but telecommunications industry leaders scoffed at the idea that they would pay anywhere near that sum.

Once bidding began, however, prices started rising by tens of millions of dollars an hour, to telecom executives’ disbelief and horror. “It felt as if we were playing multi-million-dollar games of poker,” recalls John McMillan, an auction theorist at Stanford University, who helped the FCC run the auction.

That first auction garnered $617 million for just 10 small licenses, and another held in December of that year raised more than $7 billion, breaking all records for the sale of public goods in America and leading the New York Times to hail it as “the greatest auction ever.” By early 2001 the spectrum auctions had brought in $42 billion, with more licenses still to be sold.

But things could have turned out differently. To make sure the auctions would go smoothly the government invested a lot of effort in preparing the rules of the auctions, and it paid off.

Designing the auction rules was a problem of great complexity. The FCC had divided the spectrum into thousands of licenses. Should it auction them all at once or one at a time? Should it use an open bidding format or collect sealed bids? Could it choose rules that would ensure that the licenses went to firms that would use them quickly and efficiently? Could it avoid loopholes firms could exploit, as well as prevent companies from colluding with each other to keep prices low?

To attack these questions the FCC turned to experts in the mathematical field of game theory, which figures out which strategies work best in a competitive situation. Over the decades economists had used game theory to develop a detailed picture of how bidders would behave in different types of auctions. Now the theoretical picture was put to the test, and it passed with flying colors.

The U.S. spectrum auctions have been imitated globally to sell a wide range of goods and services, including electric power, timber, and even pollution reduction contracts. Most of these auctions have been great successes. A few, in which the designers failed to heed the lessons of game theory, have been dismal flops.

The founders of game theory could never have dreamed that by the end of 2001, auctions designed using the principles of game theory would have raised more than $100 billion worldwide. Game theory, which started out in the 1920s as basic research into strategies of such parlor games as poker, has become very big business indeed. (1)

(1) The source for the material on the FCC auctions was John McMillan’s “Reinventing the Bazaar,” W. W. Norton & Company, New York, 2002.

 

The Rules of the Game

More than 70 years ago mathematicians started realizing that analyzing simple parlor games could illuminate many situations in which people compete with one another and have to decide what strategy to adopt. The principles they discovered have shed light on subjects from how nations interact in a nuclear arms race to why some organisms cooperate with one another. And in one of its most striking successes, game theory has led to a revolution in the way economists understand auctions.

The renowned Hungarian mathematician John von Neumann, a lecturer at the University of Berlin at the time, launched the field in 1928. He was curious about how game players should choose their strategies: When, for instance, should a poker player bluff? He studied two-player “zero-sum” games, such as chess and tic-tac-toe, in which the players’ interests are entirely at odds: in the simplest manifestation, one player’s gain is the other player’s loss. As any child knows, in tic-tac-toe both players can avoid losing; if they each follow their best strategies, they force the game to end in a draw. Von Neumann proved that in any two-player zero-sum game, not just in tic-tac-toe, there is a certain “right” outcome, in the sense that neither player can reasonably expect any better outcome unless the other player makes a mistake. This implies, for example, that if two chess players follow their best strategies, the game will always have the same outcome. Luckily for the excitement of the game, however, no one has ever figured out what that outcome is—a win for white, a win for black, or a draw?

Von Neumann and economist Oskar Morgenstern of Princeton University became convinced game theory would illuminate economic questions, and in 1944 they published a book, The Theory of Games and Economic Behavior, arguing that point. At the time, the prevailing approach to economics was to look at how each individual responds to the market as a whole, not how individuals interact with each other. Game theory, von Neumann and Morgenstern argued, would give economists a way to investigate how each player’s actions influence those of the others.

Von Neumann and Morgenstern’s book analyzed zero-sum games and cooperative games, in which players can form coalitions before the game starts. But many economic interactions don’t fall into either of those categories; for instance, von Neumann and Morgenstern’s cooperative framework doesn’t apply to situations in which the players have valuable secrets to preserve. For that reason, although cooperative game theory was useful for studying certain economic questions, such as problems of supply and demand, it was less useful for such subjects as auctions.

In the late 1940s mathematician John Nash, then a young graduate student at Princeton, realized that in any finite game—not just a zero-sum game—there is always a way for players to choose their strategies so that none will wish they had done something else. In 1949 he wrote a two-page paper whose ideas would change forever how economics research is pursued. Nash came up with the notion of a “strategic equilibrium”: a collection of strategies, one for each player, such that if all the players follow these strategies, no individual player has an incentive to switch to a different strategy. In the setting of two-player zero-sum games, Nash’s equilibrium gives exactly the same solution as von Neumann’s analysis. But Nash’s concept goes far beyond this scenario: He proved that even non-zero-sum games and games with more than two players must have at least one equilibrium.

Consider, for example, a three-person “duel” in which Alex, Barbara, and Chris will fire simultaneous gunshots at each other once every minute. Alex and Barbara are sharp-shooters who hit their target 99 out of 100 times. Chris, however, only makes his shot 30 percent of the time. Surprisingly, if all the players follow their equilibrium strategies, Chris is the most likely to survive! Alex and Barbara’s equilibrium strategy is to fire first at each other, since it is in their best interest to kill their most dangerous opponent first. The most likely outcome is that Alex and Barbara will kill each other on the first shot, and Chris will escape unharmed.
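The duel is easy to check by simulation. This is a sketch under explicit assumptions the text leaves open: all shots in a round are simultaneous, every player targets the most dangerous living rival, and rounds repeat until at most one player is left:

```python
import random

HIT = {"Alex": 0.99, "Barbara": 0.99, "Chris": 0.30}  # hit probabilities

def target(shooter, alive):
    """Aim at the most dangerous living rival (the equilibrium logic above)."""
    rivals = sorted(p for p in alive if p != shooter)
    return max(rivals, key=lambda p: HIT[p])

def duel(rng):
    alive = {"Alex", "Barbara", "Chris"}
    for _ in range(100):   # more than enough rounds for the duel to resolve
        if len(alive) <= 1:
            break
        volley = [(s, target(s, alive)) for s in sorted(alive)]  # simultaneous shots
        alive -= {t for s, t in volley if rng.random() < HIT[s]}
    return alive

rng = random.Random(0)
survived = {"Alex": 0, "Barbara": 0, "Chris": 0}
for _ in range(10_000):
    for p in duel(rng):
        survived[p] += 1
print(survived)   # Chris survives far more often than either sharpshooter
```

Under these assumptions Chris survives roughly 98 percent of the time: nobody aims at him in the first volley, and the sharpshooters usually kill each other.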

In some games the Nash equilibrium predicts an even more counterintuitive outcome. Imagine, for example, that you belong to a criminal gang, and you and one of your accomplices have been caught. The police don’t have enough evidence to convict you, and if you both stay silent then the best they can do is convict you on a lesser charge with a one-year prison sentence. The police offer you a deal: If you squeal on your accomplice, they'll let you off with a half-year sentence, while your hapless accomplice will get 10 years. But you know that in the next cell over, the police are making the same offer to your accomplice, and if you both rat on each other then you’ll each spend seven years inside.

In this famous “Prisoner’s Dilemma” game you’re better off if both of you stay silent than if both of you squeal. But that's not what will happen: Staying faithful to each other is not a Nash equilibrium, since you can improve your lot by squealing. The only Nash equilibrium is for both of you to squeal. In fact, squealing is what is known as a dominant strategy: It is the best thing for each of you to do, no matter what the other player does. Assuming you are both motivated by pure self-interest, you are inexorably driven toward seven-year sentences, while by cooperating you could have gotten one-year sentences.
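The claim that mutual squealing is the unique equilibrium (and that squealing is dominant) can be checked by brute force over the four strategy profiles. The sentences are the figures given above, and each prisoner minimizes his own years:

```python
from itertools import product

# Years in prison for (you, accomplice) -- lower is better.
YEARS = {
    ("silent", "silent"): (1, 1),
    ("silent", "squeal"): (10, 0.5),
    ("squeal", "silent"): (0.5, 10),
    ("squeal", "squeal"): (7, 7),
}
STRATEGIES = ("silent", "squeal")

def is_nash(mine, theirs):
    """Neither player can cut his own sentence by deviating unilaterally."""
    my_best = min(YEARS[(s, theirs)][0] for s in STRATEGIES)
    their_best = min(YEARS[(mine, s)][1] for s in STRATEGIES)
    return YEARS[(mine, theirs)] == (my_best, their_best)

equilibria = [p for p in product(STRATEGIES, repeat=2) if is_nash(*p)]
print(equilibria)   # [('squeal', 'squeal')] -- the only Nash equilibrium
```

Note that (silent, silent) fails the test precisely because either prisoner can cut his sentence from one year to half a year by squealing.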

Nash’s equilibrium concept gives economists a precise mathematical approach to analyzing how people will behave in competitive situations. But, perhaps because of its very simplicity, for a couple of decades after Nash wrote about the equilibrium, most economists didn't realize just what a powerful tool he had handed them. Even Nash’s dissertation advisor thought Nash's theorem was an elegant result, but not a particularly useful one.

Part of the reason many economists didn’t immediately see the value of Nash’s equilibrium concept was that in Nash’s formulation, each player knows ahead of time what payoffs the other players will earn from the different possible outcomes. But in many economic interactions this is not the case. In an auction, for instance, a bidder generally doesn’t know how much the other bidders value the item being sold, making it harder to guess their strategies.

In 1967 game theorist John Harsanyi of the University of California, Berkeley, developed a method to do Nash equilibrium analyses even when players have incomplete information about each other's values. Twenty-seven years later Nash and Harsanyi shared the Nobel Memorial Prize in Economics with a third game theorist, Reinhard Selten of the University of Bonn in Germany.

With these ideas in hand, more and more economists started feeling that game theory might have some important things to say about their field. Auctions, whose precise rules make them akin to games, seemed like a natural testing ground for the theory. Researchers interested in auctions began to roll up their sleeves.

 

Which Auction Is Best?

When economists began to turn the power of game theory on auctions, they started noticing that one economist, William Vickrey of Columbia University in New York, had already used game theory to analyze auctions several years before Harsanyi developed his theory. Vickrey’s brilliant study of auction strategies was ahead of its time: Written in 1961 when economists were only starting to get a sense of game theory’s importance, it was relegated to an obscure journal and overlooked for years. Today, however, it is seen as the pioneering paper in the field of auction theory.

Vickrey, who earned the Nobel Memorial Prize in Economics in 1996 partly for his work on auction theory, studied what economists call “private value” auctions, in which each bidder’s value for the item for sale is independent of the values of the other bidders. For instance, if a Rembrandt painting is being auctioned and you want to buy it simply because you like it, then knowing how much your rivals value it won’t affect how much you value it yourself. Vickrey compared three of the most common auctions (English, Dutch, and sealed first-price auctions) and designed a fourth with some surprising properties.

An English auction is the familiar “going, going, gone” auction of such art houses as Sotheby’s and Christie’s, in which the price goes up until only one bidder remains. In a Dutch auction the price starts out high and drops until someone is willing to pay that price. In a first-price auction, participants submit sealed bids and the highest bidder wins, paying her bid. To these auctions Vickrey added what became known as the second-price auction, in which participants submit sealed bids and the highest bidder wins, but pays only as much as the second-highest bid.

Why would anyone use such an arbitrary-sounding rule? Although Vickrey’s auction seems the least natural of the four, it is the one with the simplest optimal bidding strategy: Just bid the amount at which you value the object.

Suppose, for instance, you’re willing to pay up to $100 for an antique doll. What will happen if you bid less than $100, say $90? If the highest rival bid is $80, you’ll win and pay $80; but the same thing would have happened if you had bid $100. If the highest rival bid is $120, you'll lose; and again the same thing would have happened if you had bid $100. But if the highest rival bid is $95, you’ll lose the auction, whereas if you had bid $100 you would have won the doll for $95. So bidding $90 never improves your situation, and sometimes makes you lose an auction you would have liked to win. In a similar way bidding more than $100 never improves your situation, and sometimes makes you win an auction you would have liked to lose. In a second-price auction, honesty is the best policy.
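The case analysis above can be checked exhaustively over whole-dollar amounts. A sketch, assuming your value is $100 and that a tie with the highest rival bid counts as a loss (the conclusion does not depend on that detail):

```python
# In a second-price auction you pay the highest rival bid if you win.
VALUE = 100   # what the antique doll is worth to you

def utility(bid, rival):
    """Your payoff: value minus the rival's bid if you win, zero otherwise."""
    return VALUE - rival if bid > rival else 0

# For every possible highest rival bid, no bid ever beats bidding your value.
for rival in range(0, 201):
    honest = utility(VALUE, rival)
    assert all(honest >= utility(bid, rival) for bid in range(0, 201))
print("bidding your true value is never worse than any other bid")
```

The loop simply replays the $80/$95/$120 cases from the text for every rival bid at once: shading can only lose you auctions worth winning, and overbidding can only win you auctions worth losing.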

You might wonder, though, why a seller would ever use a second-price auction. Why should she let the winner pay the second-highest bid when she could make the winner pay the highest bid? Astonishingly, Vickrey proved that in a wide class of situations, the seller can expect the same amount of money regardless of which of the four auctions she uses. In 1981, game theorist Roger Myerson of the University of Chicago extended Vickrey’s result to show that all auctions bring in the same expected revenue, provided they award the item to the bidder who values it most, and provided the bidder who values it least doesn’t pay or receive any money, as would happen if there were a fee or reward simply for entering the auction.

It’s easy to see that an English auction produces the same revenue as a second-price auction: An English auction ends precisely when the second-highest bidder drops out (although in some English auctions bidders must raise the high bid by some definite increment, in which case the winner pays marginally more than the second-highest bid). The Dutch auction and the first-price auction are also equivalent to each other, since in a Dutch auction, the prize goes to the bidder willing to bid highest, and she pays what she bids.

But why doesn’t a first-price auction bring in more money than a second-price auction? The reason is that in a first-price auction, it doesn’t pay to bid honestly. If you bid $90 for the antique doll, and the second-highest bid is $80, then you’ll win the doll for $90. If you had bid $100, you would have won but paid more. So in a first-price auction the best strategy is to bid less than your value for the item—what auction theorists call “shading” your bid.

Vickrey figured out how much bidders should shade their bids by looking for the Nash equilibrium strategy. This best strategy varies depending on the circumstances of the auction—for instance, the more bidders in the auction, the less each bidder should shade his bid, since there is less room between the highest bidder’s value and the second-highest bidder’s value. But Vickrey found that no matter what the number of bidders, the shaded bids mean the seller takes home only as much money as in a second-price auction.
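Vickrey's conclusion can be illustrated with a small Monte Carlo experiment. The sketch below assumes the textbook setting of private values drawn uniformly from [0, 1], where the equilibrium first-price bid is known to be (n-1)/n times one's value; the function and its parameters are our own illustrative construction, not Vickrey's notation:

```python
import random

def expected_revenues(n_bidders, trials=200_000, seed=1):
    """Monte Carlo estimate of seller revenue under two formats,
    with n bidders whose values are uniform on [0, 1].
    Second-price: honest bids, winner pays second-highest value.
    First-price: each bids the shaded equilibrium amount (n-1)/n * value."""
    rng = random.Random(seed)
    shade = (n_bidders - 1) / n_bidders
    second_total = first_total = 0.0
    for _ in range(trials):
        values = sorted(rng.random() for _ in range(n_bidders))
        second_total += values[-2]           # second-price auction revenue
        first_total += shade * values[-1]    # winner's shaded first-price bid
    return second_total / trials, first_total / trials

sp, fp = expected_revenues(5)
print(f"second-price: {sp:.3f}  first-price: {fp:.3f}")
```

With five bidders, both formats yield an expected revenue near (n-1)/(n+1) = 2/3: the shading in the first-price auction exactly offsets the lower price rule of the second-price auction, which is revenue equivalence in miniature.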

Vickrey and Myerson’s work would seem to be the end of the story. All auctions bring in the same revenue, and the second-price auction has the easiest strategy. So it seems that auctioneers should just always use second-price auctions.

But it’s not that simple. Vickrey’s work laid the foundations of auction theory, but it didn’t answer all the questions. His work didn’t cover auctions in which the bidder who values the item most doesn’t necessarily win it—for instance, auctions that give preference to disadvantaged bidders (such as small businesses bidding against huge corporations), or auctions in which the seller sets a reserve price below which no one will win the item at all. What’s more, Vickrey assumed bidders have private values—knowing how their rivals value the item wouldn’t change how they value it themselves. But in most auctions, bidders’ values influence each other in subtle ways. Even in an art auction, in which many collectors are motivated purely by how much they like the work, some bidders may be dealers. If they find out, for instance, that a savvy dealer values the item highly, they are more likely to value it highly themselves.

Understanding situations in which bidders care about the market value of an object, not just how much they like it, gave economists plenty to do in the decades after Vickrey’s work. The results would turn out to shed crucial light on a wide range of auction environments, from government sales of oil drilling leases to airwave spectrum auctions.

 

The Winner’s Curse

 

In 1971 three employees of the petroleum giant ARCO (Edward Capen, Robert Clapp, and William Campbell) noticed something odd. Oil companies bidding for offshore drilling rights in the U.S. government’s first-price auctions seemed to be suffering unexpectedly low rates of return on their investments, often finding much less oil underground than they had hoped. Why did the oil companies—which on average are pretty good at guessing how much oil lies buried in a tract—seem so often to pay more than the tract turned out to be worth?

As an analogy, imagine that a jar of nickels is being sold in a sealed first-price auction. The jar holds $10 in nickels, but none of the bidders know that; all they can see is how big the jar is. The players independently estimate how much the jar is worth. Maybe Alice guesses right, while Bob and Charlie guess the jar holds $8 and $12, respectively. Diane and Ethel are farther off, putting the value at $6 and $14, respectively.

If all the bidders bid what they think the jar is worth, Ethel will win, but she’ll pay $14 for $10 in nickels—what economists call the “winner’s curse.” Even if the jar is sold in a second-price auction, she will still overpay. Although on average the bidders are correct about how much money is in the jar, the winner is far from correct; she is the one who has overestimated the value the most. In 1983 economists Max Bazerman and William Samuelson, then at Boston University, performed an experiment in which M.B.A. students bid on a nickel jar in a first-price auction; on average the winner paid 25 percent more than the jar was actually worth.
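A simulation in the spirit of the nickel-jar experiment shows the curse directly. In the sketch below each bidder's estimate is the true value plus symmetric noise, and everyone naively bids his estimate in a first-price auction; the noise range and trial count are illustrative assumptions:

```python
import random

def average_overpayment(n_bidders, true_value=10.0, noise=4.0,
                        trials=50_000, seed=1):
    """Each bidder's estimate is true_value plus uniform noise on
    [-noise, +noise]; everyone naively bids his estimate.
    Returns the winner's average overpayment above the true value."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        winning_bid = max(true_value + rng.uniform(-noise, noise)
                          for _ in range(n_bidders))
        total += winning_bid - true_value
    return total / trials

for n in (2, 5, 10):
    print(f"{n} bidders: winner overpays by about {average_overpayment(n):.2f}")
```

Although the estimates are right on average, the winner's bid is the most optimistic one, so the winner systematically overpays, and the overpayment grows with the number of bidders.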

To protect themselves from the winner’s curse bidders must follow an odd logic. In any auction presumably some people will overestimate the value of the item. If everyone bids what they think the item is worth, the person with the highest overestimate will win and pay too much for the item. So the safe strategy for each bidder is to assume she has overestimated, and lower her bid somewhat. If she really has overestimated, this strategy will bring her bid more in line with the actual value of the item. If she has not really overestimated, lowering her bid may hurt her chances of winning the auction; but it’s worth taking this risk to avoid the winner’s curse. This reasoning applies not just to bidders for jars of nickels but also to oil companies bidding for drilling rights, baseball managers bidding for players’ contracts, dealers bidding for paintings, and bidders in any situation where the item has some intrinsic value about which the bidders are uncertain— what economists call “common value” settings.

In the late 1960s economist Robert Wilson of Stanford University decided that game theory was the way to understand common value auctions, and he convinced many of his students and colleagues to think the same. Wilson used the Nash equilibrium to figure out just how much bidders should subtract from their value estimate to provide a good safety net against the winner’s curse. Again, the optimal strategy depends partly on the number of bidders. But in this case the more bidders in the auction, the more each bidder should lower her bid, because if there are many bidders, the distribution of their value estimates is probably very spread out, with the most optimistic bidder greatly overestimating the value of the item.

In common value settings the four standard auctions are not all created equal. In 1982 auction theorists Paul Milgrom (a former student of Wilson) of Stanford University and Robert Weber of Northwestern University showed that an open English auction usually raises the most revenue—the reason roughly being that because each bidder can see how high the others are going, she will be less afraid she has overestimated and will bid more aggressively.

 

Bidding Across the Spectrum

By the early 1990s economists had used game theory to analyze bidding strategies for a wide range of situations, including hundred-million-dollar oil lease auctions. But the idea of using game theory to design the rules of the auction itself remained very much theoretical science. In 1993 that suddenly changed.

In August of that year the U.S. Congress told the Federal Communications Commission to experiment with auctioning spectrum licenses for wireless communications services. The FCC’s previous method of distributing licenses—just giving them away—had long been a bone of contention.

In the early days of spectrum licensing, the FCC had decided which firms should get licenses by holding hearings. But by the early 1980s so many firms were applying for licenses that the system ground to a halt. In 1982 the FCC decided to start awarding licenses by lottery, figuring that telecommunications companies could sort things out afterwards by selling each other licenses. But the FCC didn’t put any restrictions on who could participate in the lotteries, with embarrassing and outrageous consequences: One year, for instance, a group of dentists won a license to run cellular phones on Cape Cod, then promptly sold it to Southwestern Bell for $41 million. Even worse, it took telecommunications companies years to shuffle and reshuffle the licenses into the right hands, which is one of the reasons that Europe got cell phone service so much sooner than the United States.

Congress wanted an easy method to assign the licenses directly to the companies that would use them best. And having witnessed the sums of money companies were paying one another for the licenses, it wanted a share of the loot. Auctions, which tend to award the prize to the bidder who values it most and to extract a lot of money along the way, seemed like the way to go.

In October 1993 the FCC invited the telecommunications industry to submit proposals for how to structure the auction, publishing a preliminary report that contained footnotes to many of the important papers of auction theory. Telecom companies, most of which knew little or nothing about auction theory, started scooping up the authors of the papers as consultants. Auction theorists were suddenly a hot commodity.

The FCC had more than 2,500 licenses to distribute. Traditionally, when many items are up for auction, auctioneers sell them one at a time. But spectrum licenses, unlike rare coins or paintings, are not independent of each other: One company might want a northern California license only if it can also get a southern California license, for instance. If the licenses were auctioned one at a time, with the northern California license coming up first, a company that wanted both wouldn’t know how high to value the northern license, since it wouldn't know what its chances were of getting the southern license later. This would create the risk that some licenses would fail to be won by the bidders who needed them most. And because bidders would have such incomplete information about the value of the licenses, they would bid cautiously to avoid the winner's curse. On the advice of game theorists the FCC decided to auction the licenses in one fell swoop, in spite of the challenges of running such a complicated auction.

The FCC also had to decide which auction type to use: sealed or open bids, first price or second? Milgrom and Weber’s research suggested that an open English auction would raise the most revenue, since it would allow bidders to gather the most information and make them bid most confidently. The FCC decided to follow that advice, with a slight twist: In each round of the auction the bidders placed bids secretly in enclosed booths; the FCC then announced the new high price without saying who had bid it. Masking the bidders’ identities in this way lessened their ability to engage in retaliatory bidding against each other or in collusion to keep prices down.

The final design, based on proposals by Milgrom, Wilson, and auction theorist Preston McAfee of the University of Texas, Austin, was a spectacular success. Not only did it raise more money than anyone anticipated but it also succeeded in Congress’s primary goal: to award the licenses to companies that would use them efficiently. Within two years of the first spectrum auctions, wireless phones based on the new technology were on the market.

 

Future Directions

Sometimes the main contribution of game theory to auction design is not some deep theorem but simply the idea that it is vital for auction designers and bidders to put themselves into the minds of their opponents. In recent years several disastrous auctions have shown that when an auction is poorly designed, bidders will exploit the rules in ways the auction’s creators didn’t anticipate.

For instance, in 2000, Turkey auctioned two telecom licenses one after another, with the stipulation that the selling price of the first license would be the reserve price for the second license—the minimum price they would accept for it. One company bid an enormous price for the first license, figuring that no one would be willing to pay that much for the second license, which did in fact go unsold. The company thus gained a monopoly, making its license very valuable indeed.

Sometimes bidders find sneaky ways to encode messages in their bids. In 1999 Germany sold 10 blocks of spectrum in an English auction with just two powerhouse bidders: Mannesman and T-Mobile. The auction rules stated that bidders placing new bids always had to raise the current high bid by at least 10 percent. In the first round Mannesman bid 18.18 million deutsche marks per unit on blocks 1-5 and 20 million on blocks 6-10. T-Mobile noticed, as did many observers, that adding 10 percent to 18.18 million brings it almost exactly to 20 million. T-Mobile read Mannesman’s bid to mean, “If you raise our bid on blocks 1-5 to 20 million and leave blocks 6-10 for us, we won’t get into a bidding war with you.” T-Mobile did just that, and the two companies happily divided the spoils.
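The signal rested on nothing more than the minimum-raise arithmetic, which takes one line to verify:

```python
# Under the 10 percent minimum-raise rule, the smallest legal raise of
# an 18.18 million bid lands just under 20 million: the message was
# "raise us to 20 on blocks 1-5 and take blocks 6-10 at 20 yourselves."
bid = 18.18                  # millions of marks
minimal_raise = bid * 1.10   # just a hair under 20 million
print(minimal_raise)
```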

Figuring out how to prevent such abuses is keeping auction theorists busy. And many other, more specific questions about auction design remain unanswered. Some auction theorists, such as Lawrence Ausubel of the University of Maryland, College Park, are trying to understand how to structure auctions in which many identical items are being sold, to prevent bidders from keeping prices low simply by reducing their demand. Others, such as Paul Klemperer of Oxford University, who helped design the hugely successful British spectrum auction of 2000, are tackling the question of designing auctions with few potential bidders, with the aim of attracting as many competitors into the bidding as possible. A disastrous spectrum auction in November 2000 in Switzerland, in which exactly four strong bidders were bidding for four licenses in an open English auction, highlighted the importance of this problem. Not surprisingly, the bidders got the licenses for a steal, paying less than one-thirtieth the price companies had paid for similar licenses in Britain and Germany just months earlier.

The United States electromagnetic spectrum auctions have given theorists something new to mull over: package bidding. Designing auction rules so that a company can place a single bid for a package consisting of both the northern and southern California licenses would eliminate the chance of the firm getting stuck with one and not the other. This would allow bidders to form more efficient bundles of licenses and make them bid more confidently (and hopefully, higher). But running an auction with package bidding is immensely complicated. If the buyers are all bidding on different packages, how does the auctioneer even decide which are the highest bids in each round? These are thorny issues, but auction theorists are starting to make headway. Milgrom and Ausubel have been working with the FCC to develop package auction designs, and the FCC plans to run a package auction in the near future.

With the advent of online auction services such as eBay, auctions have made their way not only into multi-billion-dollar government sales but also into the daily lives of ordinary people. Observations of these auctions are generating fresh questions. Why, for instance, do many eBay bidders wait until the final seconds of an auction before bidding?

Problems such as these are giving auction theorists a wealth of fascinating new puzzles to sharpen their insight. They will be able to draw upon the wealth of basic research into game theory and its application to auctions. The founders of game theory would surely have approved. Figuring out ingenious strategies and counter-strategies is, after all, the name of the game.

 

*****************“The Bidding Game” was written by science writer Erica Klarreich************************


***************Auction ******************

 

What exactly is an auction? You've seen auctions in movies, you've read about them, you've probably participated, and you believe nothing could be simpler, right? Someone bids, the price goes up, someone else bids, and when everyone is silent, the object is sold. Well . . . sometimes.

More to the point, why does it matter? Perhaps because auctions are a multi-billion dollar business, and the mortgage rate you pay is determined in part by auctions held by the U.S. Treasury. Perhaps because that particular auction is not the kind of auction you think it is. Perhaps because some bidders spend years mastering strategies that enable them to exploit the misunderstandings of the unwary. And perhaps because smart auction principles can be used to buy and sell computer resources in a way that saves substantial amounts of human time and money. However, before all that, what exactly is an auction?

Even the term auction (from the Latin root "auctio," meaning increase) is something of a misnomer, because not all auctions use ascending price schemes. In fact, there are many different auction formats, including the familiar ascending bid, but also the descending, sealed-bid, simultaneous, handshake, and whispered forms of bidding. Many of the more unusual formats have been practiced for hundreds of years, including one variety in which huge estates were sold during the time it took a single one-inch candle stub to flicker out.

Auctions are useful when selling a commodity of undetermined quality. Banks compete for loan customers of uncertain risk, graduate schools compete for students of unknown ability, and wine merchants may not have tasted the wares. [Vincent] Auctions can be used for single items such as a work of art and for multiple units of a homogeneous item such as gold or Treasury securities. For countries changing from centrally-planned to market-based economies, auctions offer an ability to value goods that might not otherwise be available. They are useful in circumstances wherein the goods do not have a fixed or determined market value; in other words, when a seller is unsure of the price he can get.

Choosing to sell an item by auctioning it off is more flexible than setting a fixed price and less time-consuming and expensive than negotiating a price (such as happens in a car lot). In a price negotiation, each bid and counter-bid is considered separately, but in an auction the competing bids are offered almost simultaneously.

In fact, an auctionable resource can be nearly anything--public land, livestock, wine, flowers, fish, cars, construction contracts, equity shares, or contracts in the game of bridge. The common denominator is that the value of each item varies enough to preclude direct and absolute pricing. In one fascinating experiment, external offices (with prized windows and locations) were auctioned off as a way of solving the quandary of how to allocate physical resources at Arizona State University's College of Business without infuriating the entire staff. [Boyes]

Simply stated, an auction is a method of allocating scarce goods, a method that is based upon competition. It is the purest of markets: a seller wishes to obtain as much money as possible, and a buyer wants to pay as little as necessary. An auction offers the advantage of simplicity in determining market-based prices. It is efficient in the sense that an auction usually ensures that resources accrue to those who value them most highly and ensures also that sellers receive the collective assessment of the value. (In later chapters, we will see that sellers do not necessarily receive maximum value in the ascending-bid format. [Varian]) What is unique about the auction is that the price is set not by the seller, but by the bidders.

On the other hand, it is the seller who sets the rules by choosing the type of auction to be used.

One oddity regularly occurs in the wine auction market. [Ashenfelter] It is commonly understood in wine circles that when identical lots of wine are peddled at the same auction, later lots are frequently sold for a lower price than early lots. Auctioneers know this but are reluctant to reveal this information to inexperienced participants because such bidders often conclude that the auction house is dishonest. Thus, auctioneers have learned to disguise this anomaly by offering small lots of wine A before offering larger lots of wine A. People assume the reason for the price difference comes from a quantity discount, and so they pay no attention. In fact, the difference is real.

An auction is unusual also in that, unlike other methods of selling, generally the auctioneer doesn't own the goods, but acts rather, as an agent for someone who does. Frequently, the buyers know more than the seller about the value of the item. A seller, not wanting to suggest a price first out of fear that his ignorance will prove costly, holds an auction to extract information he might not otherwise realize.

There are different ways to classify auctions. There are open auctions as well as sealed-bid auctions. There are auctions where the price ascends and auctions where the price drops at regular intervals. Generally, experts agree that there are four major one-sided auction formats: English, Dutch, First-Price sealed-bid, and Vickrey (uniform second-price). One difficulty is the lack of commonality in naming conventions. What some people call a uniform second-price auction is known in financial communities as a Dutch auction, and no end of confusion results.

Which auction is best? The answer depends upon many variables. A seller's perspective is different from that of a buyer. Some auction types decrease the incentives to cheat while others provide ample room for mischief. Sometimes speed is important. If you are selling flowers or fresh fish or anything that has to get to market quickly, an auction that takes weeks or even hours is not a good solution. In some auctions the buyer must be present, and that is sub-optimal if the auction is in New York and you are in Tokyo. Different circumstances dictate different answers.

Sometimes an auction is useful in hindering dishonest dealings. If the mayor of New York were free to accept the first bid made by a contractor on a new city building, the contractor would probably be a relative and the taxpayers would lose money (again).

Are there drawbacks to auctions? Of course. The "winner's curse" is the widely recognized phenomenon in which a "lucky" winner pays more for an item than it is worth. Auction winners are faced with the sudden realization that their valuation of an object is higher than that of anyone else.

In auctions in which no bidder is sure of the worth of the good being auctioned, the winner is the bidder who made the highest guess. If bidders have reasonable information about the worth of the item, then the average of all the guesses is likely to be correct. The winner, however, offered the bid furthest from the actual value. [Thaler] (Actually, winner's curse is everywhere in subtle forms. Do you really want to hire the employee who has been passed over by other employers? Do you want to be the publisher who buys a manuscript that other editors have rejected?)

All in all, the auction, though not always as simple as it appears, can be thought of as a pure marketplace at work in its finest form.

***********************English Auction********************

An important observation must be made before discussing the various auction formats: people generally have one of two motivations for participating in an auction of any type. The first arises when a bidder wishes to acquire goods for personal consumption (wine or fresh flowers); in this case the bidder makes his own private valuation of the item for sale. All bidders have private valuations and tend to keep that information private. There would be little point in an auction if the seller already knew what the highest valuation of the object would be.

The second reason for bidding in an auction is to acquire items for resale or commercial use. In this case, an individual bid is predicated not only upon a private valuation reached independently, but also upon an estimate of future valuations of later buyers. Each bidder of this type tries (using the same measurements) to guess the ultimate price of the item. In other words, the item is really worth the same to all, but the exact amount is unknown. This is called a common-value assumption, and one example is that of art purchased solely for promotion in some secondary market. Purchasing land for its mineral rights is another example. Each bidder has different information and a different valuation, but each must guess what price the land might ultimately bring.

People's bidding behavior changes depending upon which motivation is driving them.

William Vickrey [Vickrey] established the basic taxonomy of auctions based upon the order in which prices are quoted and the manner in which bids are tendered. He established four major (one sided) auction types.

The English auction is the format most familiar to Americans and is known also as the open-outcry auction or the ascending-price auction. It is used commonly to sell art, wine and numerous other goods.

Paul Milgrom [Milgrom-1] defines the English auction in the following way. "Here the auctioneer begins with the lowest acceptable price--the reserve price-- and proceeds to solicit successively higher bids from the customers until no one will increase the bid. The item is 'knocked down' (sold) to the highest bidder."

Contrary to popular belief, not all goods at an auction are actually knocked down. In some cases, when a reserve price is not met, the item is not sold. (In other instances discussed later, an item is not really sold because a shill from the auction house has accidentally bought it.) Some states require the auctioneer to state at the conclusion of bidding whether or not the item has been sold.

Sometimes the auctioneer will maintain secrecy about the reserve price, and he must start the bidding without revealing the lowest acceptable price. One possible explanation for the secrecy is to thwart rings (subsets of bidders who have banded together and agree not to outbid each other, thus effectively lowering the winning bid).

Despite its seeming simplicity, this auction format is quite complex. Often bids are not made aloud, but rather signaled--tugging the ear, raising a bidding paddle, etc. This system of signaling has several advantages. First, an auction hall would be bedlam if voices were required. Audible bids increase the likelihood of error because there may be more than one person bidding at a single instant and an auctioneer cannot be expected to hear them all.

Many traders prefer the semi-anonymity--a known expert in a certain field may not want others to know he is bidding because it would probably increase the bidding interest. When a decision to accept signals is made, a system of price intervals must be introduced so that seller and buyer understand the signals. In certain situations, an auctioneer has wide discretion. In America the auctioneer often calls out the amount he has in hand and the amount he is seeking as well. In England, however, often the auctioneer does not lead bidders this way, but rather waits to be told what a bidder will offer.

Adding to the complexity, competition is at its highest in the English auction, with some bidders becoming carried away with enthusiasm. Winner's curse (paying more for an item than its value) is widespread in this type of auction because inexperienced participants bid up the price.

One variation on the open-outcry auction is the open-exit auction in which the prices rise continuously, but players must publicly announce that they are dropping out when the price is too high. Once a bidder has dropped out, he may not reenter. This variation provides more information about the valuations (common or public) of others than when players can drop out secretly (and sometimes even reenter later).

In another variation, an auctioneer calls out each asking price and bidders lift a paddle to indicate a willingness to pay that amount. Then the auctioneer calls out another price, and so on.

In the ascending-bid format, the auctioneer can exert great influence. He can manipulate bidders with his voice, his tone, and his personality. He can increase the pace or even refuse to notice certain bidders (for example, if he believes someone is a member of a ring, the auctioneer might choose to ignore him).

Eric Rasmusen [Rasmusen] mentions one unusual variation on the English auction that occurs in France. After the last bid of an open-cry art auction, a representative of the Louvre has the right to raise his hand and say, "préemption de l'état" and take the painting at the highest price. It might be noted that in France, the auction privilege (the right to conduct an auction) is sold to a select few individuals (some 500 throughout the country) by the central government. This privilege is called the chargé.

The key to any successful auction (from a seller's point of view) is the effect of competition on the potential buyers. In an English auction, the underbidder usually forces the bid up by one small step at a time. Often a successful bidder acquires an object for considerably less than his maximum valuation simply because he need only increase each bid by a small increment. In other words, the seller does not necessarily receive maximum value, and other auction types may be superior to the English auction for this reason (at least from the seller's perspective). [Varian]
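A tiny sketch of an ascending auction makes the point concrete. In the simplified model below (the bidder names, valuations, and increment are invented for illustration), the winner pays roughly the runner-up's valuation plus one increment, which can sit well below his own maximum:

```python
def english_auction(valuations, increment):
    """Simplified ascending auction: bidders stay in while the price is
    below their valuation; the last bidder standing wins and pays about
    the runner-up's valuation plus one bid increment."""
    ranked = sorted(valuations.items(), key=lambda kv: kv[1])
    winner = ranked[-1][0]
    runner_up_value = ranked[-2][1]
    return winner, runner_up_value + increment

winner, price = english_auction({"A": 100, "B": 130, "C": 85}, increment=5)
print(winner, price)  # B wins at 105, well under his 130 valuation
```

The 25-unit gap between the winner's valuation and the price he pays is surplus the seller never sees, which is the disadvantage described above.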

Another disadvantage of the English system is that a buyer must be present, which may be difficult and/or expensive. Finally, this auction type is highly susceptible to rings.

****************Dutch Auction*****************

The descending-price auction, commonly known in academic literature as the Dutch auction, uses an open format rather than a sealed-bid method. It is the technique used in the Netherlands to auction produce and flowers (hence, a "Dutch" auction). Unfortunately, the financial world has chosen to refer to another type of auction as the Dutch auction. In the financial world, the auction known as "Dutch" is what is referred to in the academic world as a uniform, second-price auction. Great confusion results. In this series of articles, the "Dutch" auction will mean a descending-bid structure.

In a Dutch auction, bidding starts at an extremely high price and is progressively lowered until a buyer claims an item by calling "mine", or by pressing a button that stops an automatic clock. When multiple units are auctioned, normally more takers press the button as price declines. In other words, the first winner takes his prize and pays his price and later winners pay less. When the goods are exhausted, the bidding is over.
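The clock mechanism can be sketched in a few lines of Python; the bidder names, thresholds, and prices below are illustrative, not drawn from any real flower auction:

```python
def dutch_auction(start_price, decrement, bidder_thresholds):
    """Descending-clock sketch for a single item: the price drops by
    `decrement` each tick until some bidder's claim threshold is reached.
    Thresholds are the prices at which each bidder would call "mine"."""
    price = start_price
    while price > 0:
        claimants = [name for name, limit in bidder_thresholds.items()
                     if price <= limit]
        if claimants:
            return claimants[0], price   # first to press the button wins
        price -= decrement
    return None, 0

winner, price = dutch_auction(100, 1, {"Alice": 72, "Bob": 85})
print(winner, price)  # Bob claims the item at 85; the clock never reaches 72
```

Note that the auction ends at the first threshold the falling price touches, so the bidder with the highest willingness to claim wins at his own claiming price, not at the runner-up's.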

Dutch auctions have been used to finance credit in Romania and for foreign exchange in Bolivia, Jamaica, and Zambia, and have also been used to sell fish in England and in Israel.

Dutch auctions are common in less obvious forms. Filene's, a large store in Boston, keeps in its basement a variety of marked-down goods, each with a price and date attached. The price paid at the register is the price on the tag minus a discount that depends upon how long ago the item was tagged. As time passes and the item remains unsold, the discount rises from 10 to as high as 70 percent.
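The Filene's mechanism amounts to a discount schedule keyed to time on the rack. The tiers below are hypothetical, chosen only to match the 10-to-70-percent range mentioned above; the store's actual schedule may have differed:

```python
def filene_discount(days_on_rack):
    """Hypothetical Filene's-style schedule: the discount rises with
    time unsold, from 10 percent up to a cap of 70 percent.
    The tier boundaries here are illustrative assumptions."""
    tiers = [(7, 10), (14, 25), (21, 40), (28, 55)]  # (max days, percent off)
    for days, pct in tiers:
        if days_on_rack <= days:
            return pct
    return 70  # cap once the item has lingered past four weeks

price_tag = 80.00
for d in (3, 10, 30):
    pct = filene_discount(d)
    print(f"day {d}: {pct}% off, pay {price_tag * (1 - pct / 100):.2f}")
```

The longer the shopper waits, the cheaper the item gets, but the greater the risk someone else claims it first: exactly the descending-price tension of a Dutch auction.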

The English system may be inferior to the Dutch in one respect. As noted earlier, the underbidder in an English auction forces the bid up by only one small step at a time, so the winner may end up paying well under his valuation, and the seller does not receive the maximum price.

However, in the Dutch system, if the bidder with the highest interest really wants an item, he cannot afford to wait too long to enter his bid. That means he might bid at or near his highest valuation.

Characteristics of Different Types of Auctions

Type: English, or ascending-price. Open.
Rules: Seller announces a reserve price or some low opening bid. Bidding increases progressively until demand falls. The winning bidder pays his final bid, typically one increment above the last underbid. Bidders may re-assess their valuations during the auction.

Type: Dutch, or descending-price. Open.
Rules: Seller announces a very high opening bid. The bid is lowered progressively until demand rises to match supply.

First Price Sealed Bid Auction

The third auction type considered here has as its primary characteristic that bids are sealed (not open-outcry like the English or Dutch varieties) and thus hidden from other bidders. A winning bidder pays exactly the amount he bid. Usually (but not always), each participant is allowed one bid, which means that bid preparation is especially important. To confuse matters, the financial community refers to this type of auction as an English auction (except in Great Britain, where it is known as the American auction!). In these articles we will use the academic name rather than that used in financial circles.

Speaking generally, a sealed-bid format has two distinct parts--a bidding period in which participants submit their bids, and a resolution phase in which the bids are opened and the winner determined (sometimes the winner is not announced).

An important distinction must be made as to quantity -- whether one item or multiple items are being auctioned. The name "first-price" comes from the fact that, when a single unit is sold, the award is made at the highest offer. When multiple units are being auctioned, the auction is called "discriminatory" because not all winning bidders pay the same amount.

It works like this: in a first-price auction (one unit up for sale), each bidder submits one bid in ignorance of all other bids. The highest bidder wins and pays the amount he bid. In a discriminatory auction (more than one unit for sale), sealed bids are sorted from high to low, and items are awarded at the highest bid prices until the supply is exhausted. The most important point to remember is that winning bidders can (and usually do) pay different prices.

From a bidder's point of view, a high bid raises the probability of winning but lowers the profit if the bidder is victorious. A good strategy is to shade a bid downward closer to market consensus, a strategy that also helps to avoid winner's curse.
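As a concrete illustration of bid shading: in the standard textbook case of n bidders with independent, uniformly distributed private values, equilibrium analysis suggests bidding the fraction (n-1)/n of one's valuation. A minimal sketch in Python (the function name and the numbers are hypothetical):

```python
def shaded_bid(value, n_bidders):
    """Equilibrium shading with i.i.d. uniform private values:
    bid the fraction (n-1)/n of your valuation."""
    return value * (n_bidders - 1) / n_bidders

# Hypothetical example: 4 bidders, my valuation is 80.
print(shaded_bid(80, 4))  # 60.0
```

With more competitors the fraction rises toward 1, which matches the intuition that heavier competition leaves less room for shading.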

This type of auction is used for refinancing credit and foreign exchange. Up until 1993, the U.S. Treasury used the discriminatory auction to sell off its debt, but this method is not without its detractors. In the case of U.S. Treasury securities, Milton Friedman warned early on that the discriminatory auction was susceptible to collusion. An investor is reluctant to expose his valuation to the Treasury because the stated intention of the Treasury is to gain the highest price possible. It is advantageous to a bidder to gather information about a competitor's valuation before the auction. Milton Friedman proved to be prophetic. The U.S. Treasury securities auction will be discussed later in greater detail.

Characteristics of Different Types of Auctions

Type: English, or ascending-price. Open.
Rules: Seller announces a reserve price or some low opening bid. Bidding increases progressively until demand falls. Winning bidder pays highest valuation. Bidder may re-assess valuation during auction.

Type: Dutch, or descending-price. Open.
Rules: Seller announces a very high opening bid. Bid is lowered progressively until demand rises to match supply.

Type: First-price, sealed bid. Known as the discriminatory auction when multiple items are being auctioned.
Rules: Bids submitted in written form with no knowledge of the bids of others. Winner pays the exact amount he bid.

Vickrey Auction

The uniform second-price auction is commonly called the Vickrey auction, named after William Vickrey, [Vickrey] winner of the 1996 Nobel Prize in Economic Sciences, who analyzed it in the 1960s. Like the first-price auction, it is a sealed-bid auction, and each bidder is ignorant of the other bids. (In the financial community, the uniform second-price auction is called the Dutch auction, but in these papers we will use the academic names.)

The item is awarded to the highest bidder at a price equal to the second-highest bid (or highest unsuccessful bid). In other words, the winner pays less than his own bid. If, for example, bidder A bids $10, bidder B bids $15, and bidder C offers $20, bidder C wins, but he pays only the second-highest bid, namely $15.
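The award rule can be sketched in a few lines of Python; the function name is illustrative, and the dollar figures are those of the example above:

```python
def vickrey_outcome(bids):
    """Winner is the highest bidder; price is the second-highest bid."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]
    return winner, price

# The example from the text: A bids $10, B bids $15, C bids $20.
print(vickrey_outcome({"A": 10, "B": 15, "C": 20}))  # ('C', 15)
```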

One interesting and crucial point: when multiple units are auctioned, all winning bidders pay the same price (the highest losing bid). We will see later that the U.S. Treasury Department is experimenting with this type of auction to sell the national debt.

One wonders why any seller would choose this method to auction goods. It seems obvious that a seller would make more money by using a first-price auction, but, in fact, that has been shown to be untrue. Bidders fully understand the rules and modify their bids as circumstances dictate. In the case of a Vickrey auction, bidders adjust upward: no one is deterred by the fear of paying too high a price. Aggressive bidders receive sure and certain awards but pay a price closer to the market consensus. The price the winning bidder pays is determined by competitors' bids alone and does not depend upon any action the bidder undertakes. Less bid shading occurs because bidders do not fear the winner's curse, and they are less inclined to compare notes before an auction.

This type of auction has been used in former Czechoslovakia to refinance credit and in Guinea, Nigeria, and Uganda for foreign exchange.

What about changing the format just a little and having a second-price, open-outcry auction? In such a case, participants would bid in the ascending format and the winner would ultimately pay the price of the second-highest bid. One might imagine that such an auction would have much the same results as an English (ascending, open-outcry) auction, but in fact an auction like that would be easy to manipulate. Imagine bidder A bidding $25 for an item worth $100. Some other bidder could quite easily and safely bid $750, knowing that no one will bid more and that he will only pay $25. Clearly it is imperative to seal the bids.

Characteristics of Different Types of Auctions

Type: English, or ascending-price. Open.
Rules: Seller announces a reserve price or some low opening bid. Bidding increases progressively until demand falls. Winning bidder pays highest valuation. Bidder may re-assess valuation during auction.

Type: Dutch, or descending-price. Open.
Rules: Seller announces a very high opening bid. Bid is lowered progressively until demand rises to match supply.

Type: First-price, sealed bid. Known as the discriminatory auction when multiple items are being auctioned.
Rules: Bids submitted in written form with no knowledge of the bids of others. Winner pays the exact amount he bid.

Type: Vickrey auction, or second-price sealed bid. Known as the uniform-price auction when multiple items are being auctioned.
Rules: Bids submitted in written form with no knowledge of the bids of others. Winner pays the second-highest amount bid.

Auction Strategy

The truth is that the entire subject of auction strategy is numbingly complex with numerous variables coming into play. Is a bidder risk-averse or risk-neutral? Is the auction for one item or multiple units? Do you plan to resell the acquired object or use it yourself? If you plan to resell it, are the other bidders symmetric? That is, do they use the same measurements to estimate their valuations? Do you have secret information about the object? Might others have secret information?

All of these factors play a part in auction strategy, and so this section can only provide an overview with a few assorted general ideas. Those readers wishing to know more are invited to read the bibliography which contains excellent references, and those references in turn point to other more technical papers.

It is safe to make a few general remarks.

Buyers really do bid differently depending upon the rules of an auction, and it is worth understanding the rules of an auction thoroughly. In fact, the one piece of information available to all is the rules.

Economists use a framework called game theory to think about auction behavior. Using game theory, economists examine rational behavior and the decisions made under varying conditions.

A seller, on the one hand, is faced with choosing an auction type, and so he must predict the behavior of bidders. On the other hand, a bidder tries to predict the behavior of the other bidders. Each bidder makes an estimate of his own value of the object and also an estimate of what others will bid on it. Good bidding is often the result of correct predictions about the behavior of others and sometimes that means guessing the extent of someone else's information correctly. [Mester]

Economists try to devise sets of rules to determine dominant strategies under a huge array of variables. Bidders, of course, tend to worry more about their bids than their strategy.

From a Seller's Perspective

In any auction a seller can influence results by revealing information about the object. Intuitively, a bidder's profits rise when he can exploit information asymmetries (when the bidder has information not available to others). In general, the more information bidders have, the less the price-dampening effect of the winner's curse. So a seller's optimal strategy is to reveal information and to link the final price to outside indicators of value (such as an authoritative appraisal). It is also a good idea because, if a seller seems reluctant to disclose something, a buyer will assume the hidden information must be unfavorable.

Revealing information removes uncertainty.

Theoretical literature demonstrates that, under the private-value assumption, all four basic auction types yield the same expected price and revenue to the seller when bidders are risk neutral and symmetric. This implies that the choice of auction format is not crucial, because each format yields on average the same payoff.
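Revenue equivalence can be illustrated with a small Monte Carlo sketch. Under the assumptions of the theorem -- risk-neutral bidders with independent uniform private values -- first-price bidders shade to (n-1)/n of their value, while the second-price winner pays the runner-up's value; the two formats should average out to roughly the same revenue. All names and parameters here are illustrative:

```python
import random

def average_revenues(n_bidders=5, rounds=100_000, seed=1):
    """Monte Carlo comparison of first-price (with equilibrium shading)
    and second-price auctions, i.i.d. uniform[0,1] private values."""
    rng = random.Random(seed)
    shade = (n_bidders - 1) / n_bidders  # equilibrium shading factor
    first = second = 0.0
    for _ in range(rounds):
        values = sorted(rng.random() for _ in range(n_bidders))
        first += shade * values[-1]   # winner pays his own shaded bid
        second += values[-2]          # winner pays second-highest value
    return first / rounds, second / rounds

fp, sp = average_revenues()
print(round(fp, 3), round(sp, 3))  # both close to (n-1)/(n+1), about 0.667
```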

But revenue equivalence does not hold under the common-value assumption (when bidders value the item similarly). It has been shown that the four auction types can be ranked from highest to lowest expected revenue as follows: the English ascending-price auction; the second-price, sealed-bid auction; and the Dutch (descending) and first-price, sealed-bid auctions tied. The rankings illustrate the advantages of increased information. Remember that an English auction reveals information about rival bidders' valuations and permits dynamic updating of a personal valuation (which leads to more aggressive bidding). In comparison, bidders in first-price auctions, recognizing the winner's curse, bid less aggressively and shade their bids. Similar reasoning applies to Dutch (descending) auctions. In contrast, in the second-price sealed-bid format the winner pays the next-highest bid, so bidders raise their bids, secure that they will not be disadvantaged if rival bids are lower.

In Dutch and first-price auctions, bidders behave in the same way, and so, it does not matter which of these auctions a seller chooses nor does it matter whether the bidders have private values or common values. The reason that a bidder behaves the same in both kinds of auction is that he makes the same decision and this decision is based upon the same information. In both auctions a bidder knows that if he wins he must pay exactly what he bid. He knows also that he only wins if his bid is higher than that of everyone else. He must also decide upon his bid without knowing what others will do.

There is disagreement over this. Paul Milgrom [Milgrom (3)] argues that in general an English auction generates more money in more environments than the Dutch or sealed bid auction types (on average), and this probably helps explain its popularity.

In the case of choosing between a second-price and an English auction, however, the decision must be based upon whether bidders know their private valuations or are uncertain about the single common value of the item for sale. In an auction wherein bidders have independent private values, both auctions yield the same expected revenue.

However, in a common value auction the English and second-price auctions do not yield the same revenue. Remember that in an English auction a bidder can gain useful information by observing other bidders. He can watch to see how many bidders drop out of the auction (they value the object less) and he can see also exactly when they dropped out (how high was their last bid?). If lots of bidders remain in an auction this gives a bidder confidence that a high valuation was correct, and so he tends to bid higher.

Expected Revenue

(This chart taken from "Going, Going, Gone", by Loretta J. Mester) [Mester]

But the risk characteristics of a bidder are important too. A bidder who is risk averse (who would rather secure the item than gamble for a larger profit) tends to bid higher so that he will have a greater chance of victory. A risk-neutral bidder does not.



In the independent private-value case, when all bidders are risk neutral, a seller receives the same revenue from both the English and the second-price auction.

However, if the bidders are risk averse, then the first-price (and also the Dutch) auction yields greater revenue than the English and second-price auctions.

Expected Revenues

(This chart taken from "Going, Going, Gone", by Loretta J. Mester) [Mester]

From a Bidder's Perspective

Theoretical literature assumes that auction participants are homogeneous (risk neutral and symmetric--they use the same distribution function to estimate valuations). It assumes bidders all focus on maximizing profits and that only one item is being auctioned.

Paul Milgrom [Milgrom (1)] describes a strategy for contract bidding. "To make money in competitive bidding, you will need to mark up your bids twice: once to correct for the underestimation of costs on the projects you win and a second time to include a margin for profit. Don't let the presence of several competing bidders push you into making too aggressive a bid. The markup to adjust for underestimation will have to be larger the larger the number of your competitors and the more you respect the accuracy of their cost estimation; you may, however, want to make the profit markup smaller when there are more competitors."

Students point out that you can't make money if you are too cautious. Milgrom [Milgrom (1)] says in response, "The most important lessons to be learned from both the theory and the experiments are that the returns in bidding come from cost and information advantages, that naive bidding strategies can squander these advantages, and that bidders without some advantage have little hope of earning much profit, but could with a little bit of carelessness suffer large losses."

What about irrationality among bidders? What if Louise, a bidder, understands the winner's curse, but her opponents, Ellen and Sam, do not? What should Louise do? She knows that her rivals will overbid. The answer: she should scale her bid down even further, because the winner's curse is intensified against over-optimistic rivals. If Louise wins against a rival who commonly overbids, Louise has probably erred in her valuation.

A bidder uses the same strategy in both Dutch and first-price sealed auctions because the same information is available in each. In a Dutch auction, a bidder considers and selects a cutoff price. He will claim the object if no one has claimed it before. In a sealed-bid first price, the decision is exactly the same. All bidders have the same strategies for these auctions--shade bids down slightly so as not to be caught by winner's curse.

English Strategy

In a private-value English auction, a player's best strategy is to bid a small amount more than the previous high bid until he reaches his valuation and then stop. This is optimal because he always wants to buy an object if the price is less than its value to him, but he wants to pay the lowest possible price. Bidding always ends when the price reaches the valuation of the player with the second-highest valuation.
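The stopping rule described above can be simulated. This sketch assumes private values and a fixed bid increment; the function name and the valuations are hypothetical:

```python
def english_auction(valuations, increment=1):
    """Open ascending auction: the price rises by `increment` while at
    least two bidders are still willing to pay it. Returns the winner's
    valuation and the final price."""
    price = 0
    while True:
        still_in = [v for v in valuations if v >= price + increment]
        if len(still_in) < 2:
            break
        price += increment
    return max(valuations), price

# Valuations 60, 85, 100: the item goes to the 100-valuation bidder
# at roughly the runner-up's valuation of 85.
print(english_auction([60, 85, 100]))  # (100, 85)
```

Note that the final price tracks the second-highest valuation, not the winner's, which is exactly the point made in the text.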

An advantage to English auctions is that a bidder gains information. He can observe and see not only that other players drop out, but also the price at which the competition abandons the bidding. That tells a bidder a great deal about the valuations of others and allows a bidder to revise his valuation on the fly.

A player's strategy is his series of bids as a function of (1) his value, (2) his prior estimate of the other players' valuations, and (3) the past bids of other players. His bid can be updated as information changes.

If you attend an auction in person, it is good to remember that auctioneers sometimes appreciate the first bid on an item because it helps get the auction started. Sometimes they show their appreciation by giving the first bidder what is called a "fast knock". From the point of view of the auctioneer, a fast knock is a calculated sacrifice, something akin to a loss-leader in a department store sale.

Another point to remember is that some people are intimidated by rings (groups of bidders who collude to hold prices down), but you can always outbid a ring: its strategy has to be based upon buying an item at a price low enough to make a profit.

Dutch Strategy

The problem for the bidder in a Dutch auction is exactly the same as that facing a bidder in a sealed-bid auction. At some point in advance, the bidder must decide the maximum amount he will bid. He must decide when to stop the auction based upon his own valuation of the object and his prior beliefs about the valuations of other bidders.

This auction type is strategically equivalent to first-price sealed auction because no relevant information is disclosed in the course of the auction, only at the end when it is too late.

First-Price, Sealed Bid Strategy

It is difficult to specify a single strategy because a profit-maximizing bid depends upon the actions of others. The tradeoff is between bidding high and winning more often, and bidding low and benefiting more if the bid wins (bigger profit margin).

Most bidders attempt to shade their bids to move closer to market consensus. This also helps to avoid winner's curse.

Vickrey Strategy

Paul Milgrom [Milgrom (1)] suggests that the dominant strategy for a bidder in a Vickrey (second-price) auction is to submit a bid equal to his true reservation price: he then accepts all prices below his reservation price and none above it. A participant who bids less merely lowers his chance of victory; bidding higher risks the winner's curse. Neither deviation affects the price he pays if he wins.

When each bidder adopts the strategy of bidding his true value, the item is awarded to the bidder with the highest valuation at a price equal to the second-highest valuation. The existence of a dominant strategy means that a bidder can determine his own sealed bid without regard for the actions of others. In this way, a second-price auction duplicates the principal characteristics of an English auction.
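The dominance of truthful bidding can be checked by brute force: against any rival high bid, no alternative bid does better than bidding one's true value. An illustrative sketch (the $50 valuation and the bid grid are arbitrary):

```python
def payoff(my_bid, my_value, rival_high_bid):
    """Second-price payoff: win and pay the rival's bid if mine is higher,
    otherwise get nothing."""
    return my_value - rival_high_bid if my_bid > rival_high_bid else 0

value = 50
for rival in range(0, 101, 5):          # every rival high bid considered
    truthful = payoff(value, value, rival)
    # no alternative bid beats bidding the true value
    assert all(payoff(b, value, rival) <= truthful for b in range(0, 101))
print("bidding your true value is never beaten")
```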

A potential drawback is that this system requires total honesty from the auctioneer. If the auctioneer is not trustworthy, he could open the bids, find the winner, and insert a new bid just barely under the winning one to ensure higher revenue.

 


Cooperative Games

All of the examples so far have focused on non-cooperative solutions to "games." We recall that there is, in general, no unique answer to the question "what is the rational choice of strategies?" Instead there are at least two possible answers, two possible kinds of "rational" strategies, in non-constant sum games. Often there are more than two "rational solutions," based on different definitions of a "rational solution" to the game. But there are at least two: a "non-cooperative" solution in which each person maximizes his or her own rewards regardless of the results for others, and a "cooperative" solution in which the strategies of the participants are coordinated so as to attain the best result for the whole group. Of course, "best for the whole group" is a tricky concept -- that's one reason why there can be more than two solutions, corresponding to more than one concept of "best for the whole group."

Without going into technical details, here is the problem: if people can arrive at a cooperative solution, any non-constant sum game can in principle be converted to a win-win game. How, then, can a non-cooperative outcome of a non-constant sum game be rational? The obvious answer seems to be that it cannot be rational: as Anatol Rapoport argued years ago, the cooperative solution is the only truly rational outcome in a non-constant sum game. Yet we do seem to observe non-cooperative interactions every day, and the "noncooperative solutions" to non-constant sum games often seem to be descriptive of real outcomes. Arms races, street congestion, environmental pollution, the overexploitation of fisheries, inflation, and many other social problems seem to be accurately described by the "noncooperative solutions" of rather simple nonconstant sum games. How can all this irrationality exist in a world of absolutely rational decision makers?

 

Credible Commitment

There is a neoclassical answer to that question. The answer has been made explicit mostly in the context of inflation. According to the neoclassical theory, inflation happens when the central bank increases the quantity of money in circulation too fast. The solution to inflation, then, is to slow down or stop increasing the quantity of money. If the central bank were committed to stopping inflation, and businessmen in general knew that the central bank was committed, then (according to neoclassical economics) inflation could be stopped quickly and without disruption. But, in a political world, it is difficult for a central bank to make this commitment, and businessmen know this. Thus the businessmen have to be convinced that the central bank really is committed -- and that may require a long period of unemployment, sky-high interest rates, recession and business failures. The cost of eliminating inflation can therefore be very high -- which makes it all the more difficult for the central bank to make the commitment. The difficulty is that the central bank cannot make a credible commitment to a low-inflation strategy.

Evidently (as seen by neoclassical economics) the interaction between the central bank and businessmen is a non-constant sum game, and recessions are a result of a "noncooperative solution to the game." This can be extended to non-constant sum games in general: noncooperative solutions occur when participants in the game cannot make credible commitments to cooperative strategies. Evidently this is a very common difficulty in many human interactions.

Games in which the participants cannot make commitments to coordinate their strategies are "noncooperative games." The solution to a "noncooperative game" is a "noncooperative solution." In a noncooperative game, the rational person's problem is to answer the question "What is the rational choice of a strategy when other players will try to choose their best responses to my strategy?"

Conversely, games in which the participants can make commitments to coordinate their strategies are "cooperative games," and the solution to a "cooperative game" is a "cooperative solution." In a cooperative game, the rational person's problem is to answer the question, "What strategy choice will lead to the best outcome for all of us in this game?" If that seems excessively idealistic, we should keep in mind that cooperative games typically allow for "side payments," that is, bribes and quid pro quo arrangements so that everyone is (might be?) better off. Thus the rational person's problem in the cooperative game is actually a little more complicated than that. The rational person must ask not only "What strategy choice will lead to the best outcome for all of us in this game?" but also "How large a bribe may I reasonably expect for choosing it?"

 

A Basic Cooperative Game

Cooperative games are particularly important in economics. Here is an example that may illustrate the reason why. We suppose that Joey has a bicycle. Joey would rather have a game machine than a bicycle, and he could buy a game machine for $80, but Joey doesn't have any money. We express this by saying that Joey values his bicycle at $80. Mikey has $100 and no bicycle, and would rather have a bicycle than anything else he can buy for $100. We express this by saying that Mikey values a bicycle at $100.

The strategies available to Joey and Mikey are to give or to keep. That is, Joey can give his bicycle to Mikey or keep it, and Mikey can give some of his money to Joey or keep it all. Suppose that Mikey gives Joey $90 and Joey gives Mikey the bicycle. This is what we call "exchange." Here are the payoffs:

Table 12-1

                        Joey
                  give        keep
Mikey    give   110, 90     10, 170
         keep   200, 0      100, 80

EXPLANATION: At the upper left, Mikey has a bicycle he values at $100 plus $10 extra, while Joey has a game machine he values at $80 plus $10 extra. At the lower left, Mikey has the bicycle he values at $100 plus $100 extra, and Joey has nothing. At the upper right, Joey has a game machine and a bike, each of which he values at $80, plus $10 extra, and Mikey is left with only $10. At the lower right, they simply have what they began with -- Mikey $100 and Joey a bike.

If we think of this as a noncooperative game, it is much like a Prisoners' Dilemma. To keep is a dominant strategy and keep, keep is a dominant strategy equilibrium. However, give, give makes both better off. Being children, they may distrust one another and fail to make the exchange that will make them better off. But market societies have a range of institutions that allow adults to commit themselves to mutually beneficial transactions. Thus, we would expect a cooperative solution, and we suspect that it would be the one in the upper left. But what cooperative "solution concept" may we use?
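A small script can confirm that keep is a dominant strategy for both boys. The payoff entries are taken from Table 12-1 (Mikey's payoff first, then Joey's); the function name is illustrative:

```python
# Payoffs (Mikey, Joey); keys are (Mikey's strategy, Joey's strategy).
payoffs = {
    ("give", "give"): (110, 90), ("give", "keep"): (10, 170),
    ("keep", "give"): (200, 0),  ("keep", "keep"): (100, 80),
}
STRATEGIES = ("give", "keep")

def dominant(player):
    """Return a strategy that is a best response to every rival
    strategy, if one exists (player 0 = Mikey, player 1 = Joey)."""
    for s in STRATEGIES:
        if all(
            payoffs[(s, r) if player == 0 else (r, s)][player]
            >= payoffs[(t, r) if player == 0 else (r, t)][player]
            for r in STRATEGIES for t in STRATEGIES
        ):
            return s
    return None

print(dominant(0), dominant(1))  # keep keep
```

So keep, keep is the dominant strategy equilibrium, even though give, give pays both players more -- the Prisoners' Dilemma structure noted in the text.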

 

Pareto Optimum

We have observed that both participants in the bike-selling game are better off if they make the transaction. This is the basis for one solution concept in cooperative games.

First, we define a criterion to rank outcomes from the point of view of the group of players as a whole. We can say that one outcome is better than another (upper left better than lower right, e.g.) if at least one person is better off and no-one is worse off. This is called the Pareto criterion, after the Italian economist and mechanical engineer, Vilfredo Pareto. If an outcome (such as the upper left) cannot be improved upon, in that sense -- in other words, if no-one can be made better off without making somebody else worse off -- then we say that the outcome is Pareto Optimal, that is, Optimal (cannot be improved upon) in terms of the Pareto Criterion.

If there were a unique Pareto optimal outcome for a cooperative game, that would seem to be a good solution concept. The problem is that there isn't -- in general, there are infinitely many Pareto Optima for any fairly complicated economic "game." In the bike-selling example, every cell in the table except the lower right is Pareto-optimal, and in fact any price between $80 and $100 would give yet another of the (infinitely many) Pareto-Optimal outcomes to this game. All the same, this was the solution criterion that von Neumann and Morgenstern used, and the set of all Pareto-Optimal outcomes is called the "solution set."
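The claim that every cell except the lower right is Pareto-optimal can be checked mechanically. The payoffs below are those of Table 12-1 (Mikey first, Joey second); the function name is illustrative:

```python
outcomes = {
    "upper left":  (110, 90), "upper right": (10, 170),
    "lower left":  (200, 0),  "lower right": (100, 80),
}

def pareto_optimal(cell):
    """A cell is Pareto-optimal if no other cell makes at least one
    player better off and nobody worse off."""
    a = outcomes[cell]
    return not any(
        b != a and all(x >= y for x, y in zip(b, a))
        for b in outcomes.values()
    )

for cell in outcomes:
    print(cell, pareto_optimal(cell))
# The lower right (keep, keep) is the only cell that is not
# Pareto-optimal: it is dominated by the upper left (give, give).
```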

 

Alternative Solution Concepts

If we are to improve on this concept, we need to solve two problems. One is to narrow down the range of possible solutions to a particular price or, more generally, distribution of the benefits. This is called "the bargaining problem." Second, we still need to generalize cooperative games to more than two participants. There are a number of concepts, including several with interesting results; but here attention will be limited to one. It is the Core, and it builds on the Pareto Optimal solution set, allowing these two problems to solve one another via "competition."

An Information Technology Example Revisited

When we looked at "Choosing an Information Technology," one of the two introductory examples, we came to the conclusion that it is more complex than the Prisoners' Dilemma in several ways. Unlike the Prisoners' Dilemma, it is a cooperative game, not a noncooperative game. Now let's look at it from that point of view.

When the information system user and supplier get together and work out a deal for an information system, they are forming a coalition in game theory terms. (Here we have been influenced more by political science than economics, it seems!) The first decision will be whether to join the coalition or not. In this example, that's a pretty easy decision. Going it alone, neither the user nor the supplier can be sure of a payoff more than 0. By forming a coalition, both choosing the advanced system, they can get a total payoff of 40 between them. The next question is: how will they divide that 40 between them? How much will the user pay for the system? We need a little more detail about this game before we can go on. The payoff table above was net of the payment. It was derived from the following gross payoffs:

Table A-2

                              User
                      Advanced      Proven
Supplier  Advanced    -50, 90        0, 0
          Proven        0, 0       -30, 40

The gross payoffs to the supplier are negative, because the production of the information system is a cost item to the supplier, and the benefits to the supplier are the payment they get from the user, minus that cost. For Table A-1, I assumed a payment of 70 for an advanced or 35 for a proven system. But those are not the only possibilities in either case.

How much will be paid? Here are a couple of key points to move us toward an answer:

 

Using that information, we get Figure A-1:

Figure A-1

The diagram shows the net payoff to the supplier on the horizontal axis and the net payoff to the user on the vertical axis. Since the supplier will not agree to a payment that leaves her with a loss, only the solid green diagonal line -- corresponding to total payoffs of 40 to the two participants -- will be possible payoffs. But any point on that solid line will satisfy the two points above. In that sense, all the points on the line are possible "solutions" to the cooperative game, and von Neumann and Morgenstern called it the "solution set."

But this "solution set" covers a multitude of sins. How are we to narrow down the range of possible answers? There are several possibilities. The range of possible payments might be influenced, and narrowed, by:

There are game-theoretic approaches based on all of these considerations, and on combinations of them. Unfortunately, this leads to several different concepts of "solutions" of cooperative games, and they may conflict. One of them -- the core, based on competitive pressures -- will be explored in these pages. We will have to leave the others for another time.

There is one more complication to consider, when we look at the longer run. What if the supplier does not continue to support the information system chosen? What if the supplier invests to support the system in the long run, and the user doesn't continue to use it? In other words, what if the commitments the participants make are limited by opportunism?

Cooperative Games and the Core

Language for Cooperative Games

We will need a bit of language to talk about cooperative games with more than two persons. A group of players who commit themselves to coordinate their strategies is called a "coalition." What the members of the coalition get, after all the bribes, side payments, and quids pro quo have cleared, is called an "allocation" or "imputation."

(The problem of coalitions also arises in zero-sum games, if there are more than two players. With three or more players, some of the players may profit by "ganging up" on the rest. For example, in poker, two or more players may cooperate to cheat a third, dividing the pelf between themselves. This is cheating, in poker, because the rules of poker forbid cooperation among the players. For the members of a coalition of this kind, the game becomes a non-zero sum game -- both of the cheaters can win, if they cheat efficiently).

"Allocation" and "imputation" are economic terms, and economists are often concerned with the efficiency of allocations. The standard definition of efficient allocation in economics is "Pareto optimality." Let us pause to recall that concept. In defining an efficient allocation, it is best to proceed by a double-negative. An allocation is inefficient if there is at least one person who can do better, while no other person is worse off. (That makes sense -- if somebody can do better without anyone else being made worse off, then there is an unrealized potential for benefits in the game). Conversely, the allocation is efficient in the Paretian sense if no-one can be made better off without making someone else worse off.

The members of a coalition, C, are a subset of the set of players in the game. (Remember, a "subset" can include all of the players in the game. If the subset is less than the whole set of players in the game, it is called a "proper" subset). If all of the players in the game are members of the coalition, it is called the "grand" coalition. A coalition can also have only a single member. A coalition with just a single member is called a "singleton coalition."

Let us say that the members of coalition C get payoff A. (A is a vector or list of the payoffs to all the members of C, including side payments, if any). Now suppose that some of the members of coalition C could join another coalition, C', with an allocation of payoffs A'. The members of C who switch to C' may be called "defectors." If the payoffs to defectors in A' are greater than those in A, then we say that A' "dominates" A through coalition C'. In other words: an allocation is dominated if some of the members of the coalition can do better for themselves by deserting that coalition for some other coalition.

The Core

We can now define one important concept of the solution of a cooperative game. The core of a cooperative game consists of all undominated allocations in the game. In other words, the core consists of all allocations with the property that no subgroup within the coalition can do better by deserting the coalition.

Notice that an allocation in the core of a game will always be an efficient allocation. Here, again, the best way to show this is to reason in double-negatives -- that is, to show that an inefficient allocation cannot be in the core. To say that the allocation A is inefficient is to say that a grand coalition can be formed in which at least one person is better off, and no-one worse off, than they are in A. Thus, any inefficient allocation is dominated through the grand coalition.

Now, two very important limitations should be mentioned. The core of a cooperative game may be of any size -- it may have only one allocation, or there may be many allocations in the core (corresponding either to one or many coalitions), and it is also possible that there may not be any allocations in the core at all. What does it mean to say that there are no allocations in the core? It means that there are no stable coalitions -- whatever coalition may be formed, there is some subgroup that can benefit by deserting it. A game with no allocations in the core is called an "empty-core game."

I said that the rational player in a cooperative game must ask "not only 'What strategy choice will lead to the best outcome for all of us in this game?' but also 'How large a bribe may I reasonably expect for choosing it?'" The core concept answers this question as follows: "Don't settle for a smaller bribe than you can get from another coalition, and don't make any commitments until you are sure."

We will now consider two applications of the concept of the core. The first is a "market game," a game of exchange. We then return to a game we have looked at from the noncooperative viewpoint: the queuing game.

A Market Game

Economists often claim that "increasing competition" (an increasing number of participants on both sides of the market, demanders and suppliers) limits monopoly power. Our market game is designed to bring out that idea.

The concept of the core, and the effect of "increasing competition" on the core, can be illustrated by a fairly simple numerical example, provided we make some simplifying assumptions. We will assume that there are just two goods: "widgets" and "money." We will also use what I call the benefits hypothesis -- that is, that utility is proportional to money. In other words, we assume that the subjective benefits a person obtains from her or his possessions can be expressed in money terms, as is done in cost-benefit analysis. In a model of this kind, "money" is a stand-in for "all other goods and services." Thus, people derive utility from holding "money," that is, from spending on "all other goods and services," and what we are assuming is that the marginal utility of "all other goods and services" is (near enough) constant, so that we can use equivalent amounts of "money" or "all other goods and services" as a measure of the utility of widgets. Since money is transferable, that is very much like the "transferable utility" conception originally used by Shubik in his discussions of the core.

We will begin with an example in which there are just two persons, Jeff and Adam. At the beginning of the game, Jeff has 5 widgets but no money, and Adam has $22 but no widgets. The benefits functions are

 

Table 13-1

                Jeff benefits        Adam benefits
  widgets     total    marginal    total    marginal
     1          10        10          9         9
     2          15         5         13         4
     3          18         3         15         2
     4          21         3         16         1
     5          22         1         16         0

Adam's demand curve for widgets will be his marginal benefit curve, while Jeff's supply curve will be the reverse of his marginal benefit curve. These are shown in Figure 13-1.

Figure 13-1

Market equilibrium comes where p=3, Q=2, i.e. Jeff sells Adam 2 widgets for a total payment of $6. The two transactors then have total benefits of

               Jeff    Adam
  widgets       18      13
  money          6      16
  total         24      29

The total benefit divided between the two persons is $24+$29=$53.
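As a check on this arithmetic, here is a short Python sketch using the benefit figures from Table 13-1 (the variable names are my own, chosen for illustration):

```python
# Total benefits from Table 13-1: benefit of holding q widgets (q = 0..5).
jeff_benefit = {0: 0, 1: 10, 2: 15, 3: 18, 4: 21, 5: 22}
adam_benefit = {0: 0, 1: 9, 2: 13, 3: 15, 4: 16, 5: 16}

# At the market equilibrium, Jeff sells Adam q = 2 widgets at p = $3 each.
q, p = 2, 3
jeff_total = jeff_benefit[5 - q] + p * q      # benefit of remaining widgets + money received
adam_total = adam_benefit[q] + (22 - p * q)   # benefit of widgets bought + money kept
print(jeff_total, adam_total, jeff_total + adam_total)  # 24 29 53
```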

Now we want to look at this from the point of view of the core. The "strategies" that Jeff and Adam can choose are unilateral transfers -- Jeff can give up 0, 1, 2, 3, 4, or 5 widgets, and Adam can give up from $0 to $22. Presumably both would choose "zero" in a noncooperative game. The possible coalitions are a) a grand coalition of both persons, or b) two singleton coalitions in which each person goes it alone. In this case, a cooperative solution might involve a grand coalition of the two players. In fact, a cooperative solution to this game is a coordinated pair of strategies in which Jeff gives up some widgets to Adam and Adam gives up some money to Jeff. (In more ordinary terms, that is, of course, a market transaction). The core will consist of all such coordinated strategies such that a) neither person (singleton coalition) can do better by going it alone, and b) the coalition of the two cannot do better by a different coordination of their strategies. In this game, the core will be a set of transactions each of which fulfills both of those conditions.

Let us illustrate both conditions: First, suppose Jeff offers to sell Adam one widget for $10. But Adam's marginal benefit is only nine -- Adam can do better by going it alone and not buying anything. Thus, "one widget for $10" is not in the core. Second, suppose Jeff proposes to sell Adam one widget for $5. Adam's total benefit would then be 22-5+9=26, Jeff's 5+21=26. Both are better off, with a total benefit of 52. However, they can do better, if Jeff now sells Adam a second widget for $3.50. Adam now has benefits of 13+22-8.50=26.50, and Jeff has benefits of 18+8.50=26.50, for a total benefit of 53. Thus, a sale of just one widget is not in the core. In fact, the core will include only transactions in which exactly two widgets are sold.

We can check for this in the following way. If the "benefits hypothesis" is correct, the only transactions in the core will be transactions that maximize the total benefits for the two persons. When the two persons shift from a transaction that does not maximize benefits to one that does, they can divide the increase in benefits among themselves in the form of money, and both be better off -- so a transaction that does not maximize benefits cannot satisfy condition b) above. From Table 13-1,

Table 13-2

  quantity      benefit of widgets      money    total
    sold        to Jeff    to Adam
     0             22          0          22       44
     1             21          9          22       52
     2             18         13          22       53
     3             15         15          22       52
     4             10         16          22       48
     5              0         16          22       38

and we see that a trade of 2 maximizes total benefits.
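That tabulation can be reproduced in a few lines of Python, using the total-benefit figures from Table 13-1 (the names are illustrative only):

```python
# Benefit of holding q widgets (q = 0..5), from Table 13-1.
jeff_benefit = [0, 10, 15, 18, 21, 22]
adam_benefit = [0, 9, 13, 15, 16, 16]

# Money only changes hands, so the joint total for any sale of q widgets is
# Jeff's benefit on what he keeps, plus Adam's benefit on what he buys,
# plus the unchanged $22 of money in the system (Table 13-2).
totals = {q: jeff_benefit[5 - q] + adam_benefit[q] + 22 for q in range(6)}
print(totals)
print(max(totals, key=totals.get))  # quantity that maximizes joint benefit: 2
```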

But we have not figured out the price at which the two units will be sold. This is not necessarily the competitive "supply-and-demand" price, since the two traders are both monopolists and one may successfully hold out for a better-than-competitive price.

Here are some examples:

  Quantity     Total           Total Benefits
    Sold      Payment       Jeff's         Adam's
     2          12         18+12=30     22-12+13=23
     2           5         18+5=23      22-5+13=30
     2           8         18+8=26      22-8+13=27

What all of these transactions have in common is that the total benefits are maximized -- at 53 -- but the benefits are distributed in very different ways between the two traders. All the same, each trader does no worse than the $22 of benefits he can have without trading at all. Thus each of these transactions is in the core.
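The two core conditions can be checked mechanically. Here is a Python sketch for this two-person game (the helper functions are my own shorthand, not standard notation):

```python
def benefits(q, payment):
    """Benefits to (Jeff, Adam) when Jeff sells q widgets for a total payment."""
    jeff_b = [0, 10, 15, 18, 21, 22]   # benefit of holding q widgets (Table 13-1)
    adam_b = [0, 9, 13, 15, 16, 16]
    return jeff_b[5 - q] + payment, adam_b[q] + 22 - payment

def in_core(q, payment):
    j, a = benefits(q, payment)
    # (a) neither singleton can do better alone (each has 22 without trading);
    # (b) the joint benefit is maximized, which here requires q = 2.
    return j >= 22 and a >= 22 and q == 2

print(in_core(2, 12), in_core(2, 5), in_core(1, 10))  # True True False
```

The last case fails because Adam would end with only 9+12=21 of benefits, worse than his $22 standing alone.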

It will be clear, then, that there is a wide range of transactions in the core of this two-person game. We may visualize the core in a diagram with the benefits to Jeff on the horizontal axis and benefits to Adam on the vertical axis. The core then is the line segment ab. Algebraically, it is the line BA=53-BJ, where BA means "Adam's benefits" and BJ means "Jeff's benefits," and the line is bounded by BA>=22 and BJ>=22. The competitive equilibrium is at C.

Figure 13-2

The large size of the core is something of a problem. The cooperative solution must be one of the transactions in the core, but which one? In the two-person game, there is just no answer. The "supply-and-demand" approach does give a definite answer, shown as point C in the diagram. According to the supply-and-demand story, this equilibrium comes about because there are many buyers and many sellers. In our example, instead, we have just one of each, a bilateral monopoly. That would seem to be the problem: the core is large because the number of buyers and sellers is small.

So what happens if we allow the number of buyers and sellers to increase until it is very large? To keep things simple, we will continue to suppose that there are just two kinds of people -- jeffs and adams -- but we will consider a sequence of games with 2, 3, ..., 10, ..., 100,... adams and an equal number of jeffs and see what happens to the core of these games as the number of traders gets large.

First, suppose that there are just two jeffs and two adams. Each jeff and each adam has just the same endowment and benefit function as before.

What coalitions are possible in this larger economy? There could be two one-to-one coalitions of a jeff and an adam. Two jeffs or two adams could, in principle, form a coalition; but since they would have nothing to exchange, there would be little point in it. There could also be coalitions of two jeffs and an adam, two adams and a jeff, or a grand coalition of both jeffs and both adams.

We want to show that this bigger game has a smaller core. There are some transactions in the core of the first game that are not in this one.

Here is an example: In the 2-person game, an exchange of 12 dollars for 2 widgets is in the core. But it is not in the core of this game. At an exchange of 12 for 2, each adam gets total benefits of 23, each jeff of 30. Suppose then that a jeff forms a coalition with 2 adams, so that the jeff sells each adam one widget for $7. The jeff gets total benefits of 18+7+7=32, and so is better off. Each adam gets benefits of 15+9=24, and so is better off. This three-person coalition -- which could not have been formed in the two-person game -- "dominates" the 12-for-2 allocation, and so the 12-for-2 allocation is not in the core of the 4-person game. (Of course, the other jeff is out in the cold, but that's his look-out -- the members of the three-person coalition are better off. But, in fact, we are not saying that the three-person coalition is in the core either. It probably isn't, since the odd jeff out is likely to make an offer that would dominate this one).
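The arithmetic of this domination argument can be verified with a brief Python sketch (variable names are illustrative):

```python
# Benefit of holding q widgets, from Table 13-1.
jeff_b = [0, 10, 15, 18, 21, 22]
adam_b = [0, 9, 13, 15, 16, 16]

# The 12-for-2 bilateral trades: each jeff keeps 3 widgets and gains $12;
# each adam holds 2 widgets and keeps $10 of his $22.
jeff_old = jeff_b[3] + 12           # 30
adam_old = adam_b[2] + 22 - 12      # 23

# The 3-person coalition: one jeff sells one widget to each of two adams at $7.
jeff_new = jeff_b[3] + 2 * 7        # keeps 3 widgets, receives $14: 32
adam_new = adam_b[1] + 22 - 7       # 1 widget plus $15 kept: 24

print(jeff_new > jeff_old and adam_new > adam_old)  # the coalition dominates: True
```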

This is illustrated by the diagram in Figure 13-3. Line segment de shows the trade-off between benefits to the jeffs and the adams in a 3-person coalition. It means that, from any point on line segment fb, a shift to a 3-person coalition makes it possible to move to the northwest -- making all members of the coalition better off -- to a point on fe. Thus all of the allocations on fb are dominated, and not in the core of the 4-person game.

Figure 13-3

Here is another example: in the two-person game, an exchange of two widgets for five dollars is in the core. Again, it will not be in the core of a four-person game. Each jeff gets benefits of 23 and each adam of 30. Now, suppose an adam proposes a coalition with both jeffs. The adam will pay each jeff $2.40 for one widget. The adam then has 30.20 of benefits and so is better off. Each jeff gets 23.40 of benefits and is also better off. Thus the one-adam-and-two-jeffs coalition dominates the 2-for-5 coalition, which is no longer in the core. Figure 13-4 illustrates the situation we now have. The benefit trade-off for a 2-jeff-one-adam coalition is shown by line gj. Every allocation on ab to the left of h is dominated. Putting everything together, we see that allocations on ab to the left of h and to the right of f are dominated by 3-person coalitions, but the 3-person coalitions are dominated by the 2-person coalitions between h and f. (Four-person coalitions function like pairs of two-person coalitions, adding nothing to the game).

Figure 13-4

We can now see the core of the four-person game in Figure 13-4. It is shown by the line segment hf. It is limited by BA>=27, BJ>=24. The core of the four-person game is part of the core of the two-person game, but it is a smaller part, because the four-person game admits of coalitions which cannot be formed in the two-person game. Some of these coalitions dominate some of the coalitions in the core of the smaller game. This illustrates an important point about the core. The bigger the game, the greater the variety of coalitions that can be formed. The more coalitions, often, the smaller the core.

Let us pursue this line of reasoning one more step, considering a six-person game with three jeffs and three adams. We notice that a trade of two widgets for $8 is in the core of the four-person game, and we will see that it is not in the core of the 6-person game. Beginning from the 2-for-8 allocation, a coalition of 2 jeffs and 3 adams is proposed, such that each jeff gives up three widgets and each adam buys two, at a price of $3.80 each. The results are shown in Table 13-3.

Table 13-3

              old allocation                     new allocation
  Type    widgets  money  total benefit    widgets  money   total benefit
  Jeff       3       8         26             2     11.40       26.40
  Adam       2      14         27             2     14.40       27.40

We see that both the adams and the jeffs within the coalition are better off, so the two-and-three coalition dominates the two-for-eight bilateral trade. Thus the two-for-eight trade is not in the core of the six-person game.

What is in it? This is shown by Figure 13-5. As before, the line segment ab is the core of the two-person game and line segment gj is the benefits trade-off for the coalition of two jeffs and one adam. Segment kl is the benefits trade-off for the coalition of two jeffs and three adams. We see that every point on ab except point h is dominated, either by a two-jeff one-adam coalition or by a two-jeff three-adam coalition. The core of the six-person game is exactly one allocation: the one at point h. And this is the competitive equilibrium! No coalition can do better than it.

Figure 13-5

If we were to look at 8, 10, 100, 1000, or 1,000,000 player games, we would find the same core. This series of examples illustrates a key point about the core of an exchange game: as the number of participants (of each type) increases without limit, the core of the game shrinks down to the competitive equilibrium. This result can be generalized in various ways. First, we should observe that in many games, any finite number of players leaves more than one allocation in the core. Our game has been simplified by allowing players to trade only in whole numbers of widgets; that is one reason why the core shrinks to the competitive equilibrium so soon in our example. We may also eliminate the benefits hypothesis, assuming instead that utility is nontransferable and not proportional to money. We can also allow for more than two kinds of players, and get rid of the "types" assumption completely, at the cost of much greater mathematical complexity. But the general idea is simple enough. With more participants, more kinds of coalitions can form, and some of those coalitions dominate coalitions that could form in smaller games. Thus a bigger game will have a smaller core; in that sense "more competition limits monopoly power." But (in a market game) the supply-and-demand equilibrium is the one allocation that is always in the core. And this provides us with a new understanding of the unique role of the market equilibrium.

The Queuing Game and the Core

We have seen that the market game has a non-empty core, but some very important games have empty cores. From the mathematical point of view, this seems to be a difficulty -- the problem has no solution. But from the economic point of view it may be an important diagnostic point. The University of Chicago economist Lester Telser has argued that empty-core games provide a rationale for government regulation of markets. The core is empty because efficient allocations are dominated -- people can defect to coalitions that can promise them more than they can get from an efficient allocation. What government regulation does in such a case is to prohibit some of the coalitions. Ruling out some coalitions by means of regulation may allow an efficient coalition to form and to remain stable -- the coalitions through which it might be dominated are prohibited by regulation.

In another segment, we have looked at a game that has an inefficient noncooperative equilibrium: the queuing game. We shall see that the Queuing Game also is an empty-core game. Recalling that every allocation in the core is Pareto Optimal, and that Pareto Optimality in this game presupposes a grand coalition of all players to refrain from starting a queue, it will suffice to show that the grand coalition is unstable against the defection of a single agent who forms a singleton coalition and starts a one-person queue.

It is easy to see that the defector will be better off if the rump coalition (the five remaining in a coalition not to queue) continues its strategy of not contesting any place in line. Then the defector gets a net payoff of 18 with certainty, better than the average payoff of 12.5 she would get with the grand coalition -- and this observation is just a repetition of the argument that the grand coalition is not a Nash equilibrium. But the rump coalition need not simply continue with its policy of noncontesting. For example, it can contest the first position in line, while continuing the agreement to allocate the balance at random. This would leave the aggressor with a one-sixth chance of the first place, but she can do no worse than second, so her expected payoff would then be (1/6)(18)+(5/6)(15)=15.5. She will not be deterred from defecting by this possible strategy response from the rump coalition.
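Using the payoff figures from the earlier queuing-game discussion (gross payoffs of 20, 17, 14, 11, 8, and 5 by final position, less a cost of 2 for standing in line), a short Python sketch reproduces these numbers:

```python
# Gross payoffs by final position; standing in line costs 2.
gross = [20, 17, 14, 11, 8, 5]

# Grand coalition: nobody queues, positions allocated at random.
grand_coalition_avg = sum(gross) / 6                       # 12.5

# Unopposed defector queues and takes first place for certain.
defector_unopposed = gross[0] - 2                          # 18

# Rump contests only the first place: the defector has a 1/6 chance of
# first, and otherwise stands in line and is sure of second.
defector_vs_contest = (1 * (gross[0] - 2) + 5 * (gross[1] - 2)) / 6
print(grand_coalition_avg, defector_unopposed, defector_vs_contest)  # 12.5 18 15.5
```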

That is not the only strategy response open to the rump coalition. Table 13-4 presents a range of strategy alternatives available to the rump coalition:

Table 13-4

  first places contested     payoff to     average payoff
  by rump coalition          defector      to rump coalition
  no places                    18              11
  one place                    15.5            11.167
  two places                   13.5            11.233
  three places                 12              11.2
  four places                  11.167          11.167
  five places                  11.167          11.167

These are not the only strategy options available to the rump coalition. For example, the rump coalition might choose to contest just the first and third positions in line, leaving the second uncontested. But this would serve only to assure the defector of a better outcome than she could otherwise be sure of, making the members of the rump coalition worse off. Thus, the rump coalition will never choose a strategy like that, and it cannot be relevant to the defector's strategy. From the table, we see that the rump coalition's best response to the defection is to contest the first two positions in line, but no more -- leaving the defector better off as a result of defecting, with an expected payoff of 13.5 rather than 12.5. It follows that the grand coalition is unstable under recontracting.

To illustrate the reasoning that underlies the table, let us compute the payoffs for the case in which the rump coalition contests the first two positions in line, the optimal response. 1) The aggressor has a one-sixth chance of first place in line for a payoff of 18, a one-sixth chance of second place for 15, and a four-sixths chance of being third, for 12. (The aggressor must still stand in line to be sure of third place, rather than worse, although that position is uncontested). Thus the expected payoff is 18/6+15/6+4*12/6, or 13.5. 2a) With one chance in six, the aggressor is first, leaving the rump coalition to allocate among themselves rewards of 15 (second place in the queue) and 14, 11, 8, and 5 (third through last places without standing in the queue). Each of these outcomes has a conditional probability of one-fifth for each of the five individuals in the rump coalition. This accounts for expectations of one in thirty (one-sixth times one-fifth) of each of those rewards. 2b) With one chance in six, the aggressor is second, and the members of the rump coalition allocate among themselves, at random, payoffs of 18 (first place in the queue) and 14, 11, 8, and 5 (as before), accounting for further expectations of one in thirty of each of these rewards. 2c) With four chances in six, the aggressor is third -- without contest -- and the members of the rump coalition allocate among themselves, at random, rewards of 18 and 15 (the first two places in the queue) and 11, 8, and 5 (the last three places without queuing). 2d) Thus the expected payoff of a member of the rump coalition is (15+14+11+8+5)/30+(18+14+11+8+5)/30+4*(18+15+11+8+5)/30, or 11.233.
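The same computation can be written out in Python, which may make the probability weights easier to follow (the case lists below are just the leftover rewards described in 2a-2c):

```python
# Rump coalition contests the first two places in line.
# Defector: 1/6 chance each of first (18) and second (15); otherwise
# she stands in line and is sure of third (12).
defector = (18 + 15 + 4 * 12) / 6
print(defector)  # 13.5

# In each case the five rump members split the leftover payoffs at random,
# so each reward carries weight (case probability) x (1/5) = 1/30 or 4/30.
case_first  = [15, 14, 11, 8, 5]   # defector won first place (prob 1/6)
case_second = [18, 14, 11, 8, 5]   # defector won second place (prob 1/6)
case_third  = [18, 15, 11, 8, 5]   # defector fell back to third (prob 4/6)
rump = (sum(case_first) + sum(case_second) + 4 * sum(case_third)) / 30
print(round(rump, 3))  # 11.233
```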